00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 4085
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "nightly-trigger" build number 3675
00:00:00.000 originally caused by:
00:00:00.001 Started by timer
00:00:00.125 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.126 The recommended git tool is: git
00:00:00.126 using credential 00000000-0000-0000-0000-000000000002
00:00:00.128 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.194 Fetching changes from the remote Git repository
00:00:00.198 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.255 Using shallow fetch with depth 1
00:00:00.255 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.255 > git --version # timeout=10
00:00:00.306 > git --version # 'git version 2.39.2'
00:00:00.306 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.332 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.332 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.750 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.763 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.775 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.775 > git config core.sparsecheckout # timeout=10
00:00:06.786 > git read-tree -mu HEAD # timeout=10
00:00:06.801 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.827 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.827 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.903 [Pipeline] Start of Pipeline
00:00:06.919 [Pipeline] library
00:00:06.921 Loading library shm_lib@master
00:00:06.921 Library shm_lib@master is cached. Copying from home.
00:00:06.939 [Pipeline] node
00:00:06.950 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.952 [Pipeline] {
00:00:06.963 [Pipeline] catchError
00:00:06.964 [Pipeline] {
00:00:06.976 [Pipeline] wrap
00:00:06.984 [Pipeline] {
00:00:06.992 [Pipeline] stage
00:00:06.994 [Pipeline] { (Prologue)
00:00:07.266 [Pipeline] sh
00:00:07.558 + logger -p user.info -t JENKINS-CI
00:00:07.578 [Pipeline] echo
00:00:07.580 Node: CYP9
00:00:07.590 [Pipeline] sh
00:00:07.903 [Pipeline] setCustomBuildProperty
00:00:07.915 [Pipeline] echo
00:00:07.916 Cleanup processes
00:00:07.920 [Pipeline] sh
00:00:08.206 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.206 3038997 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.221 [Pipeline] sh
00:00:08.513 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.514 ++ grep -v 'sudo pgrep'
00:00:08.514 ++ awk '{print $1}'
00:00:08.514 + sudo kill -9
00:00:08.514 + true
00:00:08.529 [Pipeline] cleanWs
00:00:08.538 [WS-CLEANUP] Deleting project workspace...
00:00:08.538 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.544 [WS-CLEANUP] done
00:00:08.547 [Pipeline] setCustomBuildProperty
00:00:08.558 [Pipeline] sh
00:00:08.845 + sudo git config --global --replace-all safe.directory '*'
00:00:08.957 [Pipeline] httpRequest
00:00:10.210 [Pipeline] echo
00:00:10.212 Sorcerer 10.211.164.20 is alive
00:00:10.223 [Pipeline] retry
00:00:10.225 [Pipeline] {
00:00:10.239 [Pipeline] httpRequest
00:00:10.244 HttpMethod: GET
00:00:10.244 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.245 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.267 Response Code: HTTP/1.1 200 OK
00:00:10.267 Success: Status code 200 is in the accepted range: 200,404
00:00:10.268 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.603 [Pipeline] }
00:00:14.621 [Pipeline] // retry
00:00:14.629 [Pipeline] sh
00:00:14.920 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.939 [Pipeline] httpRequest
00:00:15.377 [Pipeline] echo
00:00:15.379 Sorcerer 10.211.164.20 is alive
00:00:15.390 [Pipeline] retry
00:00:15.392 [Pipeline] {
00:00:15.408 [Pipeline] httpRequest
00:00:15.413 HttpMethod: GET
00:00:15.414 URL: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:15.414 Sending request to url: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:15.433 Response Code: HTTP/1.1 200 OK
00:00:15.433 Success: Status code 200 is in the accepted range: 200,404
00:00:15.434 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:02:09.021 [Pipeline] }
00:02:09.040 [Pipeline] // retry
00:02:09.049 [Pipeline] sh
00:02:09.340 + tar --no-same-owner -xf spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:02:12.661 [Pipeline] sh
00:02:12.954 + git -C spdk log --oneline -n5
00:02:12.954 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:02:12.954 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:02:12.954 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev
00:02:12.954 2e10c84c8 nvmf: Expose DIF type of namespace to host again
00:02:12.954 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write
00:02:12.978 [Pipeline] withCredentials
00:02:12.992 > git --version # timeout=10
00:02:13.006 > git --version # 'git version 2.39.2'
00:02:13.028 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:02:13.031 [Pipeline] {
00:02:13.041 [Pipeline] retry
00:02:13.043 [Pipeline] {
00:02:13.061 [Pipeline] sh
00:02:13.357 + git ls-remote http://dpdk.org/git/dpdk main
00:02:35.357 [Pipeline] }
00:02:35.377 [Pipeline] // retry
00:02:35.382 [Pipeline] }
00:02:35.398 [Pipeline] // withCredentials
00:02:35.408 [Pipeline] httpRequest
00:02:35.723 [Pipeline] echo
00:02:35.725 Sorcerer 10.211.164.20 is alive
00:02:35.736 [Pipeline] retry
00:02:35.739 [Pipeline] {
00:02:35.753 [Pipeline] httpRequest
00:02:35.758 HttpMethod: GET
00:02:35.759 URL: http://10.211.164.20/packages/dpdk_4843aacb0d1201fef37e8a579fcd8baec4acdf98.tar.gz
00:02:35.759 Sending request to url: http://10.211.164.20/packages/dpdk_4843aacb0d1201fef37e8a579fcd8baec4acdf98.tar.gz
00:02:35.762 Response Code: HTTP/1.1 200 OK
00:02:35.762 Success: Status code 200 is in the accepted range: 200,404
00:02:35.763 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_4843aacb0d1201fef37e8a579fcd8baec4acdf98.tar.gz
00:02:37.539 [Pipeline] }
00:02:37.557 [Pipeline] // retry
00:02:37.567 [Pipeline] sh
00:02:37.859 + tar --no-same-owner -xf dpdk_4843aacb0d1201fef37e8a579fcd8baec4acdf98.tar.gz
00:02:39.803 [Pipeline] sh
00:02:40.091 + git -C dpdk log --oneline -n5
00:02:40.091 4843aacb0d doc: describe send scheduling counters in mlx5 guide
00:02:40.091 a4f455560f version: 24.11-rc4
00:02:40.091 0c81db5870 dts: remove leftover node methods
00:02:40.091 71eae7fe3e doc: correct definition of stats per queue feature
00:02:40.091 f2b1510f19 net/octeon_ep: replace use of word segregate
00:02:40.101 [Pipeline] }
00:02:40.114 [Pipeline] // stage
00:02:40.122 [Pipeline] stage
00:02:40.124 [Pipeline] { (Prepare)
00:02:40.141 [Pipeline] writeFile
00:02:40.156 [Pipeline] sh
00:02:40.444 + logger -p user.info -t JENKINS-CI
00:02:40.459 [Pipeline] sh
00:02:40.748 + logger -p user.info -t JENKINS-CI
00:02:40.762 [Pipeline] sh
00:02:41.053 + cat autorun-spdk.conf
00:02:41.053 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:41.053 SPDK_TEST_NVMF=1
00:02:41.053 SPDK_TEST_NVME_CLI=1
00:02:41.053 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:41.053 SPDK_TEST_NVMF_NICS=e810
00:02:41.053 SPDK_TEST_VFIOUSER=1
00:02:41.053 SPDK_RUN_UBSAN=1
00:02:41.053 NET_TYPE=phy
00:02:41.053 SPDK_TEST_NATIVE_DPDK=main
00:02:41.053 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:41.062 RUN_NIGHTLY=1
00:02:41.067 [Pipeline] readFile
00:02:41.091 [Pipeline] withEnv
00:02:41.093 [Pipeline] {
00:02:41.105 [Pipeline] sh
00:02:41.391 + set -ex
00:02:41.391 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:41.391 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:41.391 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:41.391 ++ SPDK_TEST_NVMF=1
00:02:41.391 ++ SPDK_TEST_NVME_CLI=1
00:02:41.391 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:41.391 ++ SPDK_TEST_NVMF_NICS=e810
00:02:41.391 ++ SPDK_TEST_VFIOUSER=1
00:02:41.391 ++ SPDK_RUN_UBSAN=1
00:02:41.391 ++ NET_TYPE=phy
00:02:41.391 ++ SPDK_TEST_NATIVE_DPDK=main
00:02:41.391 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:41.391 ++ RUN_NIGHTLY=1
00:02:41.391 + case $SPDK_TEST_NVMF_NICS in
00:02:41.391 + DRIVERS=ice
00:02:41.391 + [[ tcp == \r\d\m\a ]]
00:02:41.391 + [[ -n ice ]]
00:02:41.391 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:41.391 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:41.391 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:41.391 rmmod: ERROR: Module irdma is not currently loaded
00:02:41.391 rmmod: ERROR: Module i40iw is not currently loaded
00:02:41.391 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:41.391 + true
00:02:41.391 + for D in $DRIVERS
00:02:41.391 + sudo modprobe ice
00:02:41.391 + exit 0
00:02:41.504 [Pipeline] }
00:02:41.518 [Pipeline] // withEnv
00:02:41.523 [Pipeline] }
00:02:41.536 [Pipeline] // stage
00:02:41.547 [Pipeline] catchError
00:02:41.548 [Pipeline] {
00:02:41.564 [Pipeline] timeout
00:02:41.564 Timeout set to expire in 1 hr 0 min
00:02:41.566 [Pipeline] {
00:02:41.580 [Pipeline] stage
00:02:41.582 [Pipeline] { (Tests)
00:02:41.596 [Pipeline] sh
00:02:41.905 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:41.905 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:41.905 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:41.905 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:41.905 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:41.905 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:41.905 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:41.905 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:41.905 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:41.905 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:41.905 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:41.905 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:41.905 + source /etc/os-release
00:02:41.905 ++ NAME='Fedora Linux'
00:02:41.905 ++ VERSION='39 (Cloud Edition)'
00:02:41.905 ++ ID=fedora
00:02:41.905 ++ VERSION_ID=39
00:02:41.905 ++ VERSION_CODENAME=
00:02:41.905 ++ PLATFORM_ID=platform:f39
00:02:41.905 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:41.905 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:41.905 ++ LOGO=fedora-logo-icon
00:02:41.905 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:41.905 ++ HOME_URL=https://fedoraproject.org/
00:02:41.905 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:41.905 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:41.905 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:41.905 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:41.905 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:41.905 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:41.905 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:41.905 ++ SUPPORT_END=2024-11-12
00:02:41.905 ++ VARIANT='Cloud Edition'
00:02:41.905 ++ VARIANT_ID=cloud
00:02:41.905 + uname -a
00:02:41.905 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:41.905 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:45.208 Hugepages
00:02:45.208 node hugesize free / total
00:02:45.208 node0 1048576kB 0 / 0
00:02:45.208 node0 2048kB 0 / 0
00:02:45.208 node1 1048576kB 0 / 0
00:02:45.208 node1 2048kB 0 / 0
00:02:45.208
00:02:45.208 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:45.208 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:02:45.208 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:02:45.208 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:02:45.208 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:02:45.208 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:02:45.208 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:02:45.208 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:02:45.208 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:02:45.208 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:02:45.208 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:02:45.208 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:02:45.208 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:02:45.208 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:02:45.208 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:02:45.208 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:02:45.208 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:02:45.208 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:02:45.208 + rm -f /tmp/spdk-ld-path
00:02:45.208 + source autorun-spdk.conf
00:02:45.208 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:45.208 ++ SPDK_TEST_NVMF=1
00:02:45.208 ++ SPDK_TEST_NVME_CLI=1
00:02:45.208 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:45.208 ++ SPDK_TEST_NVMF_NICS=e810
00:02:45.208 ++ SPDK_TEST_VFIOUSER=1
00:02:45.208 ++ SPDK_RUN_UBSAN=1
00:02:45.208 ++ NET_TYPE=phy
00:02:45.208 ++ SPDK_TEST_NATIVE_DPDK=main
00:02:45.208 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:45.209 ++ RUN_NIGHTLY=1
00:02:45.209 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:45.209 + [[ -n '' ]]
00:02:45.209 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:45.209 + for M in /var/spdk/build-*-manifest.txt
00:02:45.209 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:45.209 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:45.209 + for M in /var/spdk/build-*-manifest.txt
00:02:45.209 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:45.209 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:45.209 + for M in /var/spdk/build-*-manifest.txt
00:02:45.209 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:45.209 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:45.209 ++ uname
00:02:45.209 + [[ Linux == \L\i\n\u\x ]]
00:02:45.209 + sudo dmesg -T
00:02:45.209 + sudo dmesg --clear
00:02:45.209 + dmesg_pid=3040601
00:02:45.209 + [[ Fedora Linux == FreeBSD ]]
00:02:45.209 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:45.209 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:45.209 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:45.209 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:45.209 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:45.209 + [[ -x /usr/src/fio-static/fio ]]
00:02:45.209 + sudo dmesg -Tw
00:02:45.209 + export FIO_BIN=/usr/src/fio-static/fio
00:02:45.209 + FIO_BIN=/usr/src/fio-static/fio
00:02:45.209 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:45.209 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:45.209 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:45.209 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:45.209 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:45.209 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:45.209 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:45.209 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:45.209 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:45.470 12:33:15 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:45.470 12:33:15 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:45.470 12:33:15 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:45.470 12:33:15 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:02:45.470 12:33:15 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:02:45.470 12:33:15 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:45.470 12:33:15 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:02:45.470 12:33:15 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:02:45.470 12:33:15 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:02:45.470 12:33:15 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:02:45.470 12:33:15 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=main
00:02:45.470 12:33:15 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:45.470 12:33:15 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1
00:02:45.470 12:33:15 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:45.470 12:33:15 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:45.470 12:33:15 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:45.470 12:33:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:45.470 12:33:15 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:45.470 12:33:15 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:45.470 12:33:15 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:45.470 12:33:15 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:45.470 12:33:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:45.470 12:33:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:45.470 12:33:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:45.470 12:33:15 -- paths/export.sh@5 -- $ export PATH
00:02:45.470 12:33:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:45.470 12:33:15 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:45.470 12:33:15 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:45.470 12:33:15 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732793595.XXXXXX
00:02:45.470 12:33:15 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732793595.PfSMFV
00:02:45.470 12:33:15 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:45.470 12:33:15 -- common/autobuild_common.sh@499 -- $ '[' -n main ']'
00:02:45.470 12:33:15 -- common/autobuild_common.sh@500 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:45.470 12:33:15 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:02:45.470 12:33:15 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:02:45.470 12:33:15 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:45.470 12:33:15 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:45.470 12:33:15 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:45.470 12:33:15 -- common/autotest_common.sh@10 -- $ set +x 00:02:45.470 12:33:15 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:02:45.470 12:33:15 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:45.470 12:33:15 -- pm/common@17 -- $ local monitor 00:02:45.470 12:33:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.470 12:33:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.470 12:33:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.470 12:33:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.470 12:33:15 -- pm/common@21 -- $ date +%s 00:02:45.470 12:33:15 -- pm/common@21 -- $ date +%s 00:02:45.470 12:33:15 -- pm/common@25 -- $ sleep 1 00:02:45.470 12:33:15 -- pm/common@21 -- $ date +%s 00:02:45.470 12:33:15 -- pm/common@21 -- $ date +%s 00:02:45.470 12:33:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732793595 00:02:45.470 12:33:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732793595 00:02:45.470 12:33:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732793595 00:02:45.470 12:33:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732793595 00:02:45.470 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732793595_collect-cpu-load.pm.log 00:02:45.470 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732793595_collect-vmstat.pm.log 00:02:45.470 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732793595_collect-cpu-temp.pm.log 00:02:45.471 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732793595_collect-bmc-pm.bmc.pm.log 00:02:46.415 12:33:16 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:46.415 12:33:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:46.415 12:33:16 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:46.415 12:33:16 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:46.415 12:33:16 -- spdk/autobuild.sh@16 -- $ date -u 00:02:46.415 Thu Nov 28 11:33:16 AM UTC 2024 00:02:46.415 12:33:16 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:46.415 v25.01-pre-276-g35cd3e84d 00:02:46.415 12:33:16 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:46.415 12:33:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:46.415 12:33:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:46.415 12:33:16 -- common/autotest_common.sh@1105 -- 
$ '[' 3 -le 1 ']' 00:02:46.415 12:33:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:46.415 12:33:16 -- common/autotest_common.sh@10 -- $ set +x 00:02:46.677 ************************************ 00:02:46.677 START TEST ubsan 00:02:46.677 ************************************ 00:02:46.677 12:33:16 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:46.677 using ubsan 00:02:46.677 00:02:46.677 real 0m0.001s 00:02:46.677 user 0m0.000s 00:02:46.677 sys 0m0.000s 00:02:46.677 12:33:16 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:46.677 12:33:16 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:46.677 ************************************ 00:02:46.677 END TEST ubsan 00:02:46.677 ************************************ 00:02:46.677 12:33:16 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:02:46.677 12:33:16 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:46.677 12:33:16 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:46.677 12:33:16 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:46.677 12:33:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:46.677 12:33:16 -- common/autotest_common.sh@10 -- $ set +x 00:02:46.677 ************************************ 00:02:46.677 START TEST build_native_dpdk 00:02:46.677 ************************************ 00:02:46.677 12:33:16 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:46.677 12:33:16 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:46.677 12:33:16 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:46.677 12:33:16 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:46.677 12:33:16 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:46.677 12:33:16 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:46.677 12:33:16 
build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:46.677 12:33:16 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:46.677 12:33:16 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:46.677 12:33:16 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:46.677 12:33:16 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:46.677 12:33:16 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:46.677 12:33:16 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:46.677 12:33:16 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:02:46.678 4843aacb0d doc: describe send scheduling counters in mlx5 guide 00:02:46.678 a4f455560f version: 24.11-rc4 00:02:46.678 0c81db5870 dts: remove leftover node methods 00:02:46.678 71eae7fe3e doc: correct definition of stats per queue feature 00:02:46.678 f2b1510f19 net/octeon_ep: replace use of word segregate 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc4 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:46.678 12:33:16 build_native_dpdk -- 
common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc4 21.11.0 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc4 '<' 21.11.0 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:46.678 patching file config/rte_config.h 00:02:46.678 Hunk #1 succeeded at 72 (offset 13 lines). 
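The `cmp_versions` trace above (scripts/common.sh) splits each version string on `.`, `-`, and `:` and compares the fields numerically, with the traced `decimal` helper normalizing leading zeros (`07` → `7`). A minimal re-implementation of that observed behavior — an illustrative sketch, not the actual scripts/common.sh source:

```shell
# Sketch of the version comparison traced in the log above. Assumption:
# this mirrors the observed behavior of cmp_versions, not its real code.
cmp_versions() {
    local IFS=.-:                      # split on dot, dash, and colon
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # Non-numeric tails such as "rc4" end the comparison, as in the trace
        [[ ${ver1[v]:-0} =~ ^[0-9]+$ && ${ver2[v]:-0} =~ ^[0-9]+$ ]] || break
        # Base-10 forces "07" -> 7, like the traced `decimal` helper
        local d1=$(( 10#${ver1[v]:-0} )) d2=$(( 10#${ver2[v]:-0} ))
        if (( d1 > d2 )); then
            [[ $op == '>' || $op == '>=' ]]; return
        elif (( d1 < d2 )); then
            [[ $op == '<' || $op == '<=' ]]; return
        fi
    done
    [[ $op == *=* ]]                   # all compared fields were equal
}

cmp_versions 24.11.0-rc4 '<' 21.11.0 && echo yes || echo no   # no, as logged
cmp_versions 24.11.0-rc4 '>=' 24.07.0 && echo yes || echo no  # yes, as logged
```

Both logged comparisons resolve early: `24 > 21` decides the first, and `11 > 07` decides the second, matching the `return 1` and `return 0` in the trace.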
00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 24.11.0-rc4 24.07.0 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc4 '<' 24.07.0 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:46.678 12:33:16 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 24.11.0-rc4 24.07.0 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc4 '>=' 24.07.0 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:46.678 12:33:16 
build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:02:46.678 12:33:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:46.679 12:33:16 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:02:46.679 12:33:16 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:02:46.679 12:33:16 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:02:46.679 12:33:16 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:02:46.679 12:33:16 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:02:46.679 12:33:16 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:02:46.679 12:33:16 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:02:46.679 12:33:16 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:46.679 12:33:16 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:02:46.679 12:33:16 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:02:46.679 12:33:16 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:46.679 12:33:16 build_native_dpdk -- scripts/common.sh@367 -- $ return 0 00:02:46.679 12:33:16 build_native_dpdk -- common/autobuild_common.sh@187 -- $ patch -p1 00:02:46.679 patching file drivers/bus/pci/linux/pci_uio.c 00:02:46.679 12:33:16 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:46.679 12:33:16 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:46.679 12:33:16 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:02:46.679 12:33:16 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:46.679 12:33:16 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' 
-Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:53.268 The Meson build system 00:02:53.268 Version: 1.5.0 00:02:53.268 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:53.268 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:53.268 Build type: native build 00:02:53.268 Project name: DPDK 00:02:53.268 Project version: 24.11.0-rc4 00:02:53.268 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:53.268 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:53.268 Host machine cpu family: x86_64 00:02:53.268 Host machine cpu: x86_64 00:02:53.268 Message: ## Building in Developer Mode ## 00:02:53.268 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:53.268 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:53.268 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:53.268 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:02:53.268 Program cat found: YES (/usr/bin/cat) 00:02:53.268 config/meson.build:122: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
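Two deprecations surface around this configure step: the warning just above points from `-Dmachine` to `cpu_instruction_set`, and Meson later warns about calling `meson [options]` without the explicit `setup` subcommand. A hedged sketch of the equivalent non-deprecated invocation, including the `printf %s,` comma-join the script uses for the driver list (paths and the full driver set are abbreviated; this mirrors, but is not, the actual autobuild script):

```shell
# Join the enabled-driver array with commas, as the log's `printf %s,` does;
# meson accepts the trailing comma.
drivers=(bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base)
drivers_csv=$(printf %s, "${drivers[@]}")
echo "$drivers_csv"   # bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,

# Non-deprecated spelling of the configure call (illustrative; would be run
# from a DPDK checkout -- only printed here, not executed):
echo meson setup build-tmp \
    --prefix="$PWD/build" --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false \
    "-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow" \
    -Dcpu_instruction_set=native \
    "-Denable_drivers=$drivers_csv"
```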
00:02:53.268 Compiler for C supports arguments -march=native: YES 00:02:53.268 Checking for size of "void *" : 8 00:02:53.268 Checking for size of "void *" : 8 (cached) 00:02:53.268 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:53.268 Library m found: YES 00:02:53.268 Library numa found: YES 00:02:53.268 Has header "numaif.h" : YES 00:02:53.268 Library fdt found: NO 00:02:53.268 Library execinfo found: NO 00:02:53.268 Has header "execinfo.h" : YES 00:02:53.268 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:53.268 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:53.268 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:53.268 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:53.268 Run-time dependency openssl found: YES 3.1.1 00:02:53.268 Run-time dependency libpcap found: YES 1.10.4 00:02:53.268 Has header "pcap.h" with dependency libpcap: YES 00:02:53.268 Compiler for C supports arguments -Wcast-qual: YES 00:02:53.268 Compiler for C supports arguments -Wdeprecated: YES 00:02:53.268 Compiler for C supports arguments -Wformat: YES 00:02:53.268 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:53.268 Compiler for C supports arguments -Wformat-security: NO 00:02:53.268 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:53.268 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:53.268 Compiler for C supports arguments -Wnested-externs: YES 00:02:53.268 Compiler for C supports arguments -Wold-style-definition: YES 00:02:53.268 Compiler for C supports arguments -Wpointer-arith: YES 00:02:53.268 Compiler for C supports arguments -Wsign-compare: YES 00:02:53.268 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:53.268 Compiler for C supports arguments -Wundef: YES 00:02:53.268 Compiler for C supports arguments -Wwrite-strings: YES 00:02:53.268 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:53.268 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:02:53.268 Program objdump found: YES (/usr/bin/objdump) 00:02:53.268 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512dq -mavx512bw: YES 00:02:53.268 Checking if "AVX512 checking" compiles: YES 00:02:53.268 Fetching value of define "__AVX512F__" : 1 00:02:53.268 Fetching value of define "__AVX512BW__" : 1 00:02:53.268 Fetching value of define "__AVX512DQ__" : 1 00:02:53.268 Fetching value of define "__AVX512VL__" : 1 00:02:53.268 Fetching value of define "__SSE4_2__" : 1 00:02:53.268 Fetching value of define "__AES__" : 1 00:02:53.268 Fetching value of define "__AVX__" : 1 00:02:53.268 Fetching value of define "__AVX2__" : 1 00:02:53.268 Fetching value of define "__AVX512BW__" : 1 00:02:53.268 Fetching value of define "__AVX512CD__" : 1 00:02:53.268 Fetching value of define "__AVX512DQ__" : 1 00:02:53.268 Fetching value of define "__AVX512F__" : 1 00:02:53.268 Fetching value of define "__AVX512VL__" : 1 00:02:53.268 Fetching value of define "__PCLMUL__" : 1 00:02:53.268 Fetching value of define "__RDRND__" : 1 00:02:53.268 Fetching value of define "__RDSEED__" : 1 00:02:53.268 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:53.268 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:53.268 Message: lib/log: Defining dependency "log" 00:02:53.268 Message: lib/kvargs: Defining dependency "kvargs" 00:02:53.268 Message: lib/argparse: Defining dependency "argparse" 00:02:53.268 Message: lib/telemetry: Defining dependency "telemetry" 00:02:53.268 Checking for function "pthread_attr_setaffinity_np" : YES 00:02:53.268 Checking for function "getentropy" : NO 00:02:53.268 Message: lib/eal: Defining dependency "eal" 00:02:53.268 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:02:53.268 Message: lib/ring: Defining dependency "ring" 00:02:53.268 Message: lib/rcu: Defining dependency "rcu" 00:02:53.268 Message: lib/mempool: Defining dependency "mempool" 00:02:53.268 
Message: lib/mbuf: Defining dependency "mbuf" 00:02:53.268 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:53.268 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:53.268 Compiler for C supports arguments -mpclmul: YES 00:02:53.268 Compiler for C supports arguments -maes: YES 00:02:53.268 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:53.268 Message: lib/net: Defining dependency "net" 00:02:53.268 Message: lib/meter: Defining dependency "meter" 00:02:53.268 Message: lib/ethdev: Defining dependency "ethdev" 00:02:53.268 Message: lib/pci: Defining dependency "pci" 00:02:53.268 Message: lib/cmdline: Defining dependency "cmdline" 00:02:53.268 Message: lib/metrics: Defining dependency "metrics" 00:02:53.268 Message: lib/hash: Defining dependency "hash" 00:02:53.268 Message: lib/timer: Defining dependency "timer" 00:02:53.268 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:53.268 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:53.268 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:53.268 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:53.268 Message: lib/acl: Defining dependency "acl" 00:02:53.268 Message: lib/bbdev: Defining dependency "bbdev" 00:02:53.268 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:53.268 Run-time dependency libelf found: YES 0.191 00:02:53.268 Message: lib/bpf: Defining dependency "bpf" 00:02:53.268 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:53.268 Message: lib/compressdev: Defining dependency "compressdev" 00:02:53.268 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:53.268 Message: lib/distributor: Defining dependency "distributor" 00:02:53.268 Message: lib/dmadev: Defining dependency "dmadev" 00:02:53.268 Message: lib/efd: Defining dependency "efd" 00:02:53.268 Message: lib/eventdev: Defining dependency "eventdev" 00:02:53.268 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:53.269 Message: lib/gpudev: 
Defining dependency "gpudev" 00:02:53.269 Message: lib/gro: Defining dependency "gro" 00:02:53.269 Message: lib/gso: Defining dependency "gso" 00:02:53.269 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:53.269 Message: lib/jobstats: Defining dependency "jobstats" 00:02:53.269 Message: lib/latencystats: Defining dependency "latencystats" 00:02:53.269 Message: lib/lpm: Defining dependency "lpm" 00:02:53.269 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:53.269 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:53.269 Fetching value of define "__AVX512IFMA__" : 1 00:02:53.269 Message: lib/member: Defining dependency "member" 00:02:53.269 Message: lib/pcapng: Defining dependency "pcapng" 00:02:53.269 Message: lib/power: Defining dependency "power" 00:02:53.269 Message: lib/rawdev: Defining dependency "rawdev" 00:02:53.269 Message: lib/regexdev: Defining dependency "regexdev" 00:02:53.269 Message: lib/mldev: Defining dependency "mldev" 00:02:53.269 Message: lib/rib: Defining dependency "rib" 00:02:53.269 Message: lib/reorder: Defining dependency "reorder" 00:02:53.269 Message: lib/sched: Defining dependency "sched" 00:02:53.269 Message: lib/security: Defining dependency "security" 00:02:53.269 Message: lib/stack: Defining dependency "stack" 00:02:53.269 Has header "linux/userfaultfd.h" : YES 00:02:53.269 Has header "linux/vduse.h" : YES 00:02:53.269 Message: lib/vhost: Defining dependency "vhost" 00:02:53.269 Message: lib/ipsec: Defining dependency "ipsec" 00:02:53.269 Message: lib/pdcp: Defining dependency "pdcp" 00:02:53.269 Message: lib/fib: Defining dependency "fib" 00:02:53.269 Message: lib/port: Defining dependency "port" 00:02:53.269 Message: lib/pdump: Defining dependency "pdump" 00:02:53.269 Message: lib/table: Defining dependency "table" 00:02:53.269 Message: lib/pipeline: Defining dependency "pipeline" 00:02:53.269 Message: lib/graph: Defining dependency "graph" 00:02:53.269 Message: lib/node: Defining dependency "node" 
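Each `Defining dependency` message above registers a library that ends up in the installed build's pkg-config metadata (`libdpdk.pc`). A usage sketch for a consumer of this build — the prefix path is an assumption for illustration, not taken from this log; substitute the `--prefix` used at configure time:

```shell
# Point pkg-config at the installed DPDK build (illustrative prefix) and
# query version, compile flags, and link flags for the enabled libraries.
export PKG_CONFIG_PATH=/opt/dpdk/build/lib/pkgconfig
pkg-config --modversion libdpdk 2>/dev/null \
    || echo "libdpdk.pc not found at $PKG_CONFIG_PATH"
```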
00:02:53.269 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:53.269 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:53.269 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:53.269 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:53.269 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:53.269 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:53.269 Compiler for C supports arguments -Wno-unused-value: YES 00:02:53.269 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:53.269 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:53.269 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:53.269 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:53.269 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:53.269 Message: drivers/power/acpi: Defining dependency "power_acpi" 00:02:53.269 Message: drivers/power/amd_pstate: Defining dependency "power_amd_pstate" 00:02:53.269 Message: drivers/power/cppc: Defining dependency "power_cppc" 00:02:53.269 Message: drivers/power/intel_pstate: Defining dependency "power_intel_pstate" 00:02:53.269 Message: drivers/power/intel_uncore: Defining dependency "power_intel_uncore" 00:02:53.269 Message: drivers/power/kvm_vm: Defining dependency "power_kvm_vm" 00:02:53.269 Has header "sys/epoll.h" : YES 00:02:53.269 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:53.269 Configuring doxy-api-html.conf using configuration 00:02:53.269 Configuring doxy-api-man.conf using configuration 00:02:53.269 Program mandb found: YES (/usr/bin/mandb) 00:02:53.269 Program sphinx-build found: NO 00:02:53.269 Program sphinx-build found: NO 00:02:53.269 Configuring rte_build_config.h using configuration 00:02:53.269 Message: 00:02:53.269 ================= 00:02:53.269 Applications Enabled 00:02:53.269 ================= 00:02:53.269 00:02:53.269 apps: 
00:02:53.269 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:53.269 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:53.269 test-pmd, test-regex, test-sad, test-security-perf, 00:02:53.269 00:02:53.269 Message: 00:02:53.269 ================= 00:02:53.269 Libraries Enabled 00:02:53.269 ================= 00:02:53.269 00:02:53.269 libs: 00:02:53.269 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:02:53.269 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:02:53.269 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:02:53.269 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:02:53.269 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:02:53.269 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:02:53.269 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:02:53.269 graph, node, 00:02:53.269 00:02:53.269 Message: 00:02:53.269 =============== 00:02:53.269 Drivers Enabled 00:02:53.269 =============== 00:02:53.269 00:02:53.269 common: 00:02:53.269 00:02:53.269 bus: 00:02:53.269 pci, vdev, 00:02:53.269 mempool: 00:02:53.269 ring, 00:02:53.269 dma: 00:02:53.269 00:02:53.269 net: 00:02:53.269 i40e, 00:02:53.269 raw: 00:02:53.269 00:02:53.269 crypto: 00:02:53.269 00:02:53.269 compress: 00:02:53.269 00:02:53.269 regex: 00:02:53.269 00:02:53.269 ml: 00:02:53.269 00:02:53.269 vdpa: 00:02:53.269 00:02:53.269 event: 00:02:53.269 00:02:53.269 baseband: 00:02:53.269 00:02:53.269 gpu: 00:02:53.269 00:02:53.269 power: 00:02:53.269 acpi, amd_pstate, cppc, intel_pstate, intel_uncore, kvm_vm, 00:02:53.269 00:02:53.269 Message: 00:02:53.269 ================= 00:02:53.269 Content Skipped 00:02:53.269 ================= 00:02:53.269 00:02:53.269 apps: 00:02:53.269 00:02:53.269 libs: 00:02:53.269 00:02:53.269 drivers: 00:02:53.269 common/cpt: not in enabled drivers 
build config 00:02:53.269 common/dpaax: not in enabled drivers build config 00:02:53.269 common/iavf: not in enabled drivers build config 00:02:53.269 common/idpf: not in enabled drivers build config 00:02:53.269 common/ionic: not in enabled drivers build config 00:02:53.269 common/mvep: not in enabled drivers build config 00:02:53.269 common/octeontx: not in enabled drivers build config 00:02:53.269 bus/auxiliary: not in enabled drivers build config 00:02:53.269 bus/cdx: not in enabled drivers build config 00:02:53.269 bus/dpaa: not in enabled drivers build config 00:02:53.269 bus/fslmc: not in enabled drivers build config 00:02:53.269 bus/ifpga: not in enabled drivers build config 00:02:53.269 bus/platform: not in enabled drivers build config 00:02:53.269 bus/uacce: not in enabled drivers build config 00:02:53.269 bus/vmbus: not in enabled drivers build config 00:02:53.269 common/cnxk: not in enabled drivers build config 00:02:53.269 common/mlx5: not in enabled drivers build config 00:02:53.269 common/nfp: not in enabled drivers build config 00:02:53.269 common/nitrox: not in enabled drivers build config 00:02:53.269 common/qat: not in enabled drivers build config 00:02:53.269 common/sfc_efx: not in enabled drivers build config 00:02:53.269 mempool/bucket: not in enabled drivers build config 00:02:53.269 mempool/cnxk: not in enabled drivers build config 00:02:53.269 mempool/dpaa: not in enabled drivers build config 00:02:53.269 mempool/dpaa2: not in enabled drivers build config 00:02:53.269 mempool/octeontx: not in enabled drivers build config 00:02:53.269 mempool/stack: not in enabled drivers build config 00:02:53.269 dma/cnxk: not in enabled drivers build config 00:02:53.269 dma/dpaa: not in enabled drivers build config 00:02:53.269 dma/dpaa2: not in enabled drivers build config 00:02:53.269 dma/hisilicon: not in enabled drivers build config 00:02:53.269 dma/idxd: not in enabled drivers build config 00:02:53.269 dma/ioat: not in enabled drivers build config 
00:02:53.269 dma/odm: not in enabled drivers build config 00:02:53.269 dma/skeleton: not in enabled drivers build config 00:02:53.269 net/af_packet: not in enabled drivers build config 00:02:53.269 net/af_xdp: not in enabled drivers build config 00:02:53.269 net/ark: not in enabled drivers build config 00:02:53.269 net/atlantic: not in enabled drivers build config 00:02:53.269 net/avp: not in enabled drivers build config 00:02:53.269 net/axgbe: not in enabled drivers build config 00:02:53.269 net/bnx2x: not in enabled drivers build config 00:02:53.269 net/bnxt: not in enabled drivers build config 00:02:53.269 net/bonding: not in enabled drivers build config 00:02:53.269 net/cnxk: not in enabled drivers build config 00:02:53.269 net/cpfl: not in enabled drivers build config 00:02:53.269 net/cxgbe: not in enabled drivers build config 00:02:53.269 net/dpaa: not in enabled drivers build config 00:02:53.269 net/dpaa2: not in enabled drivers build config 00:02:53.269 net/e1000: not in enabled drivers build config 00:02:53.269 net/ena: not in enabled drivers build config 00:02:53.269 net/enetc: not in enabled drivers build config 00:02:53.269 net/enetfec: not in enabled drivers build config 00:02:53.269 net/enic: not in enabled drivers build config 00:02:53.269 net/failsafe: not in enabled drivers build config 00:02:53.269 net/fm10k: not in enabled drivers build config 00:02:53.269 net/gve: not in enabled drivers build config 00:02:53.269 net/hinic: not in enabled drivers build config 00:02:53.269 net/hns3: not in enabled drivers build config 00:02:53.269 net/iavf: not in enabled drivers build config 00:02:53.269 net/ice: not in enabled drivers build config 00:02:53.269 net/idpf: not in enabled drivers build config 00:02:53.269 net/igc: not in enabled drivers build config 00:02:53.269 net/ionic: not in enabled drivers build config 00:02:53.269 net/ipn3ke: not in enabled drivers build config 00:02:53.269 net/ixgbe: not in enabled drivers build config 00:02:53.269 net/mana: 
not in enabled drivers build config 00:02:53.269 net/memif: not in enabled drivers build config 00:02:53.269 net/mlx4: not in enabled drivers build config 00:02:53.269 net/mlx5: not in enabled drivers build config 00:02:53.269 net/mvneta: not in enabled drivers build config 00:02:53.269 net/mvpp2: not in enabled drivers build config 00:02:53.269 net/netvsc: not in enabled drivers build config 00:02:53.269 net/nfb: not in enabled drivers build config 00:02:53.269 net/nfp: not in enabled drivers build config 00:02:53.269 net/ngbe: not in enabled drivers build config 00:02:53.269 net/ntnic: not in enabled drivers build config 00:02:53.269 net/null: not in enabled drivers build config 00:02:53.269 net/octeontx: not in enabled drivers build config 00:02:53.270 net/octeon_ep: not in enabled drivers build config 00:02:53.270 net/pcap: not in enabled drivers build config 00:02:53.270 net/pfe: not in enabled drivers build config 00:02:53.270 net/qede: not in enabled drivers build config 00:02:53.270 net/r8169: not in enabled drivers build config 00:02:53.270 net/ring: not in enabled drivers build config 00:02:53.270 net/sfc: not in enabled drivers build config 00:02:53.270 net/softnic: not in enabled drivers build config 00:02:53.270 net/tap: not in enabled drivers build config 00:02:53.270 net/thunderx: not in enabled drivers build config 00:02:53.270 net/txgbe: not in enabled drivers build config 00:02:53.270 net/vdev_netvsc: not in enabled drivers build config 00:02:53.270 net/vhost: not in enabled drivers build config 00:02:53.270 net/virtio: not in enabled drivers build config 00:02:53.270 net/vmxnet3: not in enabled drivers build config 00:02:53.270 net/zxdh: not in enabled drivers build config 00:02:53.270 raw/cnxk_bphy: not in enabled drivers build config 00:02:53.270 raw/cnxk_gpio: not in enabled drivers build config 00:02:53.270 raw/cnxk_rvu_lf: not in enabled drivers build config 00:02:53.270 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:53.270 
raw/gdtc: not in enabled drivers build config
00:02:53.270 raw/ifpga: not in enabled drivers build config
00:02:53.270 raw/ntb: not in enabled drivers build config
00:02:53.270 raw/skeleton: not in enabled drivers build config
00:02:53.270 crypto/armv8: not in enabled drivers build config
00:02:53.270 crypto/bcmfs: not in enabled drivers build config
00:02:53.270 crypto/caam_jr: not in enabled drivers build config
00:02:53.270 crypto/ccp: not in enabled drivers build config
00:02:53.270 crypto/cnxk: not in enabled drivers build config
00:02:53.270 crypto/dpaa_sec: not in enabled drivers build config
00:02:53.270 crypto/dpaa2_sec: not in enabled drivers build config
00:02:53.270 crypto/ionic: not in enabled drivers build config
00:02:53.270 crypto/ipsec_mb: not in enabled drivers build config
00:02:53.270 crypto/mlx5: not in enabled drivers build config
00:02:53.270 crypto/mvsam: not in enabled drivers build config
00:02:53.270 crypto/nitrox: not in enabled drivers build config
00:02:53.270 crypto/null: not in enabled drivers build config
00:02:53.270 crypto/octeontx: not in enabled drivers build config
00:02:53.270 crypto/openssl: not in enabled drivers build config
00:02:53.270 crypto/scheduler: not in enabled drivers build config
00:02:53.270 crypto/uadk: not in enabled drivers build config
00:02:53.270 crypto/virtio: not in enabled drivers build config
00:02:53.270 compress/isal: not in enabled drivers build config
00:02:53.270 compress/mlx5: not in enabled drivers build config
00:02:53.270 compress/nitrox: not in enabled drivers build config
00:02:53.270 compress/octeontx: not in enabled drivers build config
00:02:53.270 compress/uadk: not in enabled drivers build config
00:02:53.270 compress/zlib: not in enabled drivers build config
00:02:53.270 regex/mlx5: not in enabled drivers build config
00:02:53.270 regex/cn9k: not in enabled drivers build config
00:02:53.270 ml/cnxk: not in enabled drivers build config
00:02:53.270 vdpa/ifc: not in enabled drivers build config
00:02:53.270 vdpa/mlx5: not in enabled drivers build config
00:02:53.270 vdpa/nfp: not in enabled drivers build config
00:02:53.270 vdpa/sfc: not in enabled drivers build config
00:02:53.270 event/cnxk: not in enabled drivers build config
00:02:53.270 event/dlb2: not in enabled drivers build config
00:02:53.270 event/dpaa: not in enabled drivers build config
00:02:53.270 event/dpaa2: not in enabled drivers build config
00:02:53.270 event/dsw: not in enabled drivers build config
00:02:53.270 event/opdl: not in enabled drivers build config
00:02:53.270 event/skeleton: not in enabled drivers build config
00:02:53.270 event/sw: not in enabled drivers build config
00:02:53.270 event/octeontx: not in enabled drivers build config
00:02:53.270 baseband/acc: not in enabled drivers build config
00:02:53.270 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:53.270 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:53.270 baseband/la12xx: not in enabled drivers build config
00:02:53.270 baseband/null: not in enabled drivers build config
00:02:53.270 baseband/turbo_sw: not in enabled drivers build config
00:02:53.270 gpu/cuda: not in enabled drivers build config
00:02:53.270 power/amd_uncore: not in enabled drivers build config
00:02:53.270 
00:02:53.270 
00:02:53.270 Message: DPDK build config complete:
00:02:53.270 source path = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk"
00:02:53.270 build path = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp"
00:02:53.270 Build targets in project: 244
00:02:53.270 
00:02:53.270 DPDK 24.11.0-rc4
00:02:53.270 
00:02:53.270 User defined options
00:02:53.270 libdir : lib
00:02:53.270 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:53.270 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:53.270 c_link_args : 
00:02:53.270 enable_docs : false
00:02:53.531 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:02:53.531 enable_kmods : false
00:02:53.531 machine : native
00:02:53.531 tests : false
00:02:53.531 
00:02:53.531 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:53.531 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:53.793 12:33:23 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144
00:02:53.793 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:02:54.063 [1/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:54.063 [2/764] Compiling C object lib/librte_log.a.p/log_log_syslog.c.o
00:02:54.063 [3/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:54.063 [4/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:54.063 [5/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:54.334 [6/764] Compiling C object lib/librte_log.a.p/log_log_color.c.o
00:02:54.334 [7/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:54.334 [8/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:54.334 [9/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:54.334 [10/764] Compiling C object lib/librte_log.a.p/log_log_timestamp.c.o
00:02:54.334 [11/764] Compiling C object lib/librte_log.a.p/log_log_journal.c.o
00:02:54.334 [12/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:54.334 [13/764] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:54.334 [14/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:54.334 [15/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:54.334 [16/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:54.334 [17/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:54.334 [18/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:54.334 [19/764] Linking static target lib/librte_kvargs.a
00:02:54.596 [20/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:54.596 [21/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:54.596 [22/764] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:54.596 [23/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:54.596 [24/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:54.596 [25/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:54.596 [26/764] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:54.596 [27/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:54.596 [28/764] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:54.596 [29/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:54.596 [30/764] Linking static target lib/librte_pci.a
00:02:54.596 [31/764] Linking static target lib/librte_log.a
00:02:54.596 [32/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:54.855 [33/764] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o
00:02:54.855 [34/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:54.855 [35/764] Linking static target lib/librte_argparse.a
00:02:54.855 [36/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:54.855 [37/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:55.135 [38/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:55.135 [39/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:55.135 [40/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:55.135 [41/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:55.135 [42/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:55.135 [43/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:55.135 [44/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:55.135 [45/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore_var.c.o
00:02:55.135 [46/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_bitset.c.o
00:02:55.135 [47/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:55.135 [48/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:55.135 [49/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:55.135 [50/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:55.135 [51/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:55.135 [52/764] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:02:55.135 [53/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:55.135 [54/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:55.135 [55/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:55.135 [56/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:55.135 [57/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:55.135 [58/764] Linking static target lib/librte_cfgfile.a
00:02:55.135 [59/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:55.135 [60/764] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:55.135 [61/764] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:55.135 [62/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:55.135 [63/764] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:55.135 [64/764] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.135 [65/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o
00:02:55.135 [66/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:55.135 [67/764] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:55.135 [68/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:55.135 [69/764] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:55.135 [70/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:55.135 [71/764] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:55.135 [72/764] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:55.135 [73/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:55.135 [74/764] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.135 [75/764] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:55.135 [76/764] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:55.135 [77/764] Linking static target lib/librte_meter.a
00:02:55.135 [78/764] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:55.135 [79/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:55.135 [80/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:55.135 [81/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:55.135 [82/764] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:55.400 [83/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:55.400 [84/764] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:02:55.400 [85/764] Linking static target lib/librte_ring.a
00:02:55.400 [86/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:55.400 [87/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:55.400 [88/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:55.400 [89/764] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:55.400 [90/764] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:55.400 [91/764] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:55.400 [92/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:55.400 [93/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:55.400 [94/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:55.400 [95/764] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:02:55.400 [96/764] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.400 [97/764] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:55.400 [98/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:55.400 [99/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:55.400 [100/764] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:55.400 [101/764] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:55.400 [102/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:55.400 [103/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:55.400 [104/764] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:55.400 [105/764] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:02:55.400 [106/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:02:55.400 [107/764] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:55.400 [108/764] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:02:55.400 [109/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:02:55.400 [110/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:02:55.400 [111/764] Linking static target lib/librte_cmdline.a
00:02:55.400 [112/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:55.400 [113/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:02:55.400 [114/764] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:55.400 [115/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:55.400 [116/764] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:55.400 [117/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:55.400 [118/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:55.400 [119/764] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:02:55.400 [120/764] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:55.660 [121/764] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:55.660 [122/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:55.660 [123/764] Linking static target lib/librte_metrics.a
00:02:55.660 [124/764] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:02:55.660 [125/764] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:55.660 [126/764] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:55.660 [127/764] Linking static target lib/librte_bitratestats.a
00:02:55.660 [128/764] Compiling C object lib/librte_power.a.p/power_rte_power_qos.c.o
00:02:55.660 [129/764] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:55.660 [130/764] Linking static target lib/librte_net.a
00:02:55.660 [131/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:55.660 [132/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:55.660 [133/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:55.660 [134/764] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:55.660 [135/764] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gf2_poly_math.c.o
00:02:55.660 [136/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:55.660 [137/764] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:02:55.660 [138/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:55.660 [139/764] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:02:55.660 [140/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:55.660 [141/764] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:55.660 [142/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:55.660 [143/764] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:02:55.660 [144/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:02:55.660 [145/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:55.660 [146/764] Compiling C object lib/librte_port.a.p/port_port_log.c.o
00:02:55.660 [147/764] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:55.660 [148/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:55.660 [149/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:55.660 [150/764] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:55.660 [151/764] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:55.660 [152/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:55.660 [153/764] Linking static target lib/librte_timer.a
00:02:55.660 [154/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:55.660 [155/764] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:55.660 [156/764] Linking static target lib/librte_compressdev.a
00:02:55.660 [157/764] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.660 [158/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:02:55.922 [159/764] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:55.922 [160/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:55.922 [161/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:55.922 [162/764] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:55.922 [163/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:55.922 [164/764] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.922 [165/764] Linking static target lib/librte_mempool.a
00:02:55.922 [166/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:55.922 [167/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:55.922 [168/764] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.922 [169/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:55.922 [170/764] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:02:55.922 [171/764] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:55.922 [172/764] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:02:55.922 [173/764] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.922 [174/764] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.922 [175/764] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:02:55.922 [176/764] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:02:55.922 [177/764] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:02:55.922 [178/764] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:02:55.922 [179/764] Linking static target lib/librte_jobstats.a
00:02:55.922 [180/764] Linking static target lib/librte_bbdev.a
00:02:55.922 [181/764] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.922 [182/764] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:02:55.922 [183/764] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:55.922 [184/764] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:55.922 [185/764] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:02:55.922 [186/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:55.922 [187/764] Linking target lib/librte_log.so.25.0
00:02:55.922 [188/764] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:55.922 [189/764] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:55.922 [190/764] Compiling C object lib/librte_power.a.p/power_rte_power_cpufreq.c.o
00:02:56.181 [191/764] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:02:56.181 [192/764] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:56.181 [193/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:02:56.181 [194/764] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:56.181 [195/764] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:56.181 [196/764] Linking static target lib/librte_stack.a
00:02:56.181 [197/764] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:56.181 [198/764] Linking static target lib/librte_distributor.a
00:02:56.181 [199/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:56.181 [200/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:56.181 [201/764] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:02:56.181 [202/764] Compiling C object lib/librte_table.a.p/table_table_log.c.o
00:02:56.181 [203/764] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:56.181 [204/764] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:56.181 [205/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:56.181 [206/764] Linking static target lib/librte_dmadev.a
00:02:56.181 [207/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:56.181 [208/764] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:02:56.181 [209/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o
00:02:56.181 [210/764] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:56.181 [211/764] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:56.181 [212/764] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o
00:02:56.181 [213/764] Linking static target lib/librte_telemetry.a
00:02:56.181 [214/764] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:56.181 [215/764] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.181 [216/764] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:56.181 [217/764] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:56.181 [218/764] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:56.181 [219/764] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols
00:02:56.181 [220/764] Linking static target lib/librte_latencystats.a
00:02:56.181 [221/764] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:02:56.181 [222/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o
00:02:56.181 [223/764] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:02:56.181 [224/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:56.181 [225/764] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:02:56.181 [226/764] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:02:56.181 [227/764] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:56.181 [228/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o
00:02:56.181 [229/764] Linking static target lib/librte_regexdev.a
00:02:56.181 [230/764] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:56.181 [231/764] Linking static target lib/librte_rawdev.a
00:02:56.181 [232/764] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:56.181 [233/764] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:56.181 [234/764] Linking static target lib/librte_rcu.a
00:02:56.181 [235/764] Linking target lib/librte_kvargs.so.25.0
00:02:56.181 [236/764] Linking static target lib/librte_gpudev.a
00:02:56.181 [237/764] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:56.181 [238/764] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.181 [239/764] Linking static target lib/librte_power.a
00:02:56.181 [240/764] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:02:56.181 [241/764] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:56.181 [242/764] Linking target lib/librte_argparse.so.25.0
00:02:56.181 [243/764] Linking static target lib/librte_eal.a
00:02:56.181 [244/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:56.444 [245/764] Compiling C object lib/librte_node.a.p/node_null.c.o
00:02:56.444 [246/764] Linking static target lib/librte_dispatcher.a
00:02:56.444 [247/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:56.444 [248/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o
00:02:56.444 [249/764] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:56.444 [250/764] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:02:56.444 [251/764] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:56.444 [252/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:56.444 [253/764] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:02:56.444 [254/764] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:02:56.444 [255/764] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:56.444 [256/764] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:56.444 [257/764] Linking static target lib/librte_reorder.a
00:02:56.444 [258/764] Linking static target lib/librte_gro.a
00:02:56.444 [259/764] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:56.444 [260/764] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:56.444 [261/764] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:02:56.444 [262/764] Linking static target lib/librte_mbuf.a
00:02:56.444 [263/764] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.444 [264/764] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:56.444 [265/764] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:56.444 [266/764] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:56.444 [267/764] Linking static target lib/librte_security.a
00:02:56.444 [268/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:56.444 [269/764] Linking static target lib/librte_gso.a
00:02:56.444 [270/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:56.444 [271/764] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:02:56.444 [272/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:56.444 [273/764] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:02:56.444 [274/764] Linking static target lib/librte_mldev.a
00:02:56.444 [275/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:02:56.444 [276/764] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:56.444 [277/764] Linking static target lib/librte_rib.a
00:02:56.444 [278/764] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:56.444 [279/764] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o
00:02:56.444 [280/764] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:02:56.444 [281/764] Generating symbol file lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols
00:02:56.444 [282/764] Linking static target lib/librte_pcapng.a
00:02:56.444 [283/764] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.444 [284/764] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.444 [285/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:56.444 [286/764] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:02:56.444 [287/764] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:02:56.444 [288/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:56.444 [289/764] Linking static target lib/librte_ip_frag.a
00:02:56.444 [290/764] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:56.444 [291/764] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:02:56.444 [292/764] Linking static target lib/librte_bpf.a
00:02:56.707 [293/764] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:02:56.707 [294/764] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:02:56.707 [295/764] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.707 [296/764] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.707 [297/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:02:56.707 [298/764] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:02:56.707 [299/764] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:56.707 [300/764] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:02:56.707 [301/764] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:02:56.707 [302/764] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:56.707 [303/764] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:02:56.707 [304/764] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:02:56.707 [305/764] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:02:56.708 [306/764] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:02:56.708 [307/764] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:02:56.708 [308/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:02:56.708 [309/764] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:02:56.708 [310/764] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.708 [311/764] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.708 [312/764] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:02:56.708 [313/764] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_kvm_vm.c.o
00:02:56.708 [314/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:02:56.708 [315/764] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:02:56.708 [316/764] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:02:56.708 [317/764] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:02:56.708 [318/764] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:02:56.708 [319/764] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_guest_channel.c.o
00:02:56.708 [320/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:02:56.708 [321/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:02:56.708 [322/764] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:02:56.708 [323/764] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.708 [324/764] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.708 [325/764] Compiling C object lib/librte_node.a.p/node_log.c.o
00:02:56.708 [326/764] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output)
00:02:56.708 [327/764] Linking static target drivers/libtmp_rte_power_kvm_vm.a
00:02:56.969 [328/764] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:02:56.969 [329/764] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:02:56.969 [330/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:56.969 [331/764] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:02:56.969 [332/764] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:02:56.969 [333/764] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:56.969 [334/764] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.969 [335/764] Linking static target lib/librte_lpm.a
00:02:56.969 [336/764] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:02:56.969 [337/764] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:02:56.969 [338/764] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:02:56.969 [339/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:56.969 [340/764] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.969 [341/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:02:56.969 [342/764] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:56.969 [343/764] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:56.969 [344/764] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:02:56.969 [345/764] Linking static target lib/librte_efd.a
00:02:56.969 [346/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:56.969 [347/764] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:02:56.969 [348/764] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:02:56.969 [349/764] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.969 [350/764] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:56.969 [351/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:56.969 [352/764] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:02:56.969 [353/764] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.969 [354/764] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.969 [355/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:56.969 [356/764] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:02:56.969 [357/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:02:56.969 [358/764] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.969 [359/764] Linking target lib/librte_telemetry.so.25.0
00:02:56.969 [360/764] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:02:56.969 [361/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:56.969 [362/764] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:02:56.969 [363/764] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:02:56.969 [364/764] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.969 [365/764] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:56.969 [366/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:02:56.969 [367/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:02:56.969 [368/764] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:56.969 [369/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:57.229 [370/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:57.229 [371/764] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.229 [372/764] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:02:57.229 [373/764] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:02:57.230 [374/764] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.230 [375/764] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.230 [376/764] Generating drivers/rte_power_kvm_vm.pmd.c with a custom command
00:02:57.230 [377/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:02:57.230 [378/764] Compiling C object drivers/librte_power_kvm_vm.a.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o
00:02:57.230 [379/764] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.230 [380/764] Linking static target drivers/librte_power_kvm_vm.a
00:02:57.230 [381/764] Compiling C object drivers/librte_power_kvm_vm.so.25.0.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o
00:02:57.230 [382/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:02:57.230 [383/764] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:02:57.230 [384/764] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols
00:02:57.230 [385/764] Linking static target lib/librte_graph.a
00:02:57.230 [386/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:02:57.230 [387/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:02:57.230 [388/764] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:02:57.230 [389/764] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:02:57.230 [390/764] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.230 [391/764] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:02:57.230 [392/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:02:57.230 [393/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:57.230 [394/764] Linking static target lib/librte_pdump.a
00:02:57.230 [395/764] Compiling C object drivers/libtmp_rte_power_acpi.a.p/power_acpi_acpi_cpufreq.c.o
00:02:57.230 [396/764] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:57.230 [397/764] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:57.230 [398/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:02:57.230 [399/764] Compiling C object drivers/libtmp_rte_power_amd_pstate.a.p/power_amd_pstate_amd_pstate_cpufreq.c.o
00:02:57.230 [400/764] Linking static target drivers/libtmp_rte_power_acpi.a
00:02:57.230 [401/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:02:57.230 [402/764] Linking static target drivers/libtmp_rte_power_amd_pstate.a
00:02:57.490 [403/764] Compiling C object drivers/libtmp_rte_power_intel_uncore.a.p/power_intel_uncore_intel_uncore.c.o
00:02:57.490 [404/764] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.490 [405/764] Linking static target drivers/libtmp_rte_power_intel_uncore.a
00:02:57.490 [406/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:02:57.490 [407/764] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.490 [408/764] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:02:57.490 [409/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:02:57.490 [410/764] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:57.491 [411/764] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:57.491 [412/764] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:02:57.491 [413/764] Compiling C object drivers/libtmp_rte_power_cppc.a.p/power_cppc_cppc_cpufreq.c.o
00:02:57.491 [414/764] Linking static target drivers/librte_bus_vdev.a
00:02:57.491 [415/764] Linking static target drivers/libtmp_rte_power_cppc.a
00:02:57.491 [416/764] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:02:57.491 [417/764] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.491 [418/764] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:57.491 [419/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:02:57.491 [420/764] Compiling C object
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:57.491 [421/764] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:57.491 [422/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:57.491 [423/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:57.491 [424/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:57.491 [425/764] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.491 [426/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:57.491 [427/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:57.491 [428/764] Generating drivers/rte_power_kvm_vm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.491 [429/764] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:57.491 [430/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:57.491 [431/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:57.491 [432/764] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:57.491 [433/764] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:57.491 [434/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:57.491 [435/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:57.749 [436/764] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:57.749 [437/764] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:57.749 [438/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:57.749 [439/764] Compiling C object drivers/libtmp_rte_power_intel_pstate.a.p/power_intel_pstate_intel_pstate_cpufreq.c.o 00:02:57.749 [440/764] Compiling C object app/dpdk-graph.p/graph_conn.c.o 
00:02:57.750 [441/764] Linking static target drivers/libtmp_rte_power_intel_pstate.a 00:02:57.750 [442/764] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:57.750 [443/764] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:57.750 [444/764] Generating drivers/rte_power_acpi.pmd.c with a custom command 00:02:57.750 [445/764] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:57.750 [446/764] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:57.750 [447/764] Linking static target lib/librte_table.a 00:02:57.750 [448/764] Linking static target lib/librte_sched.a 00:02:57.750 [449/764] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:57.750 [450/764] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:57.750 [451/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:57.750 [452/764] Generating drivers/rte_power_amd_pstate.pmd.c with a custom command 00:02:57.750 [453/764] Compiling C object drivers/librte_power_acpi.a.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:02:57.750 [454/764] Compiling C object drivers/librte_power_acpi.so.25.0.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:02:57.750 [455/764] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:57.750 [456/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:57.750 [457/764] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.750 [458/764] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:57.750 [459/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:57.750 [460/764] Linking static target drivers/librte_power_acpi.a 00:02:57.750 [461/764] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.750 [462/764] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:57.750 [463/764] Linking static target lib/librte_fib.a 
00:02:57.750 [464/764] Linking static target drivers/librte_bus_pci.a 00:02:57.750 [465/764] Compiling C object drivers/librte_power_amd_pstate.a.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:02:57.750 [466/764] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:57.750 [467/764] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:02:57.750 [468/764] Compiling C object drivers/librte_power_amd_pstate.so.25.0.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:02:57.750 [469/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:57.750 [470/764] Linking static target drivers/librte_power_amd_pstate.a 00:02:57.750 [471/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:57.750 [472/764] Generating drivers/rte_power_cppc.pmd.c with a custom command 00:02:57.750 [473/764] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:57.750 [474/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:57.750 [475/764] Generating drivers/rte_power_intel_uncore.pmd.c with a custom command 00:02:57.750 [476/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:57.750 [477/764] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.750 [478/764] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.750 [479/764] Compiling C object drivers/librte_power_cppc.a.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:02:57.750 [480/764] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:57.750 [481/764] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:57.750 [482/764] Compiling C object drivers/librte_power_cppc.so.25.0.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:02:57.750 [483/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:57.750 [484/764] Compiling 
C object drivers/librte_power_intel_uncore.a.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:02:57.750 [485/764] Linking static target drivers/librte_power_cppc.a 00:02:57.750 [486/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:57.750 [487/764] Compiling C object drivers/librte_power_intel_uncore.so.25.0.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:02:57.750 [488/764] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:57.750 [489/764] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:57.750 [490/764] Linking static target drivers/librte_power_intel_uncore.a 00:02:57.750 [491/764] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.750 [492/764] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:57.750 [493/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:57.750 [494/764] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:57.750 [495/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:57.750 [496/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:57.750 [497/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:57.750 [498/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:57.750 [499/764] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.750 [500/764] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:57.750 [501/764] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:57.750 [502/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:57.750 [503/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:57.750 [504/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:57.750 
[505/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:58.009 [506/764] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:58.009 [507/764] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:58.009 [508/764] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:58.009 [509/764] Linking static target lib/librte_cryptodev.a 00:02:58.009 [510/764] Linking static target lib/librte_node.a 00:02:58.009 [511/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:58.009 [512/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:58.009 [513/764] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:58.009 [514/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:58.009 [515/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:58.009 [516/764] Generating drivers/rte_power_intel_pstate.pmd.c with a custom command 00:02:58.009 [517/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:58.009 [518/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:58.009 [519/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:58.009 [520/764] Compiling C object drivers/librte_power_intel_pstate.so.25.0.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:02:58.009 [521/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:58.009 [522/764] Compiling C object drivers/librte_power_intel_pstate.a.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:02:58.009 [523/764] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:58.009 [524/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:58.009 [525/764] Linking static target 
drivers/librte_power_intel_pstate.a 00:02:58.009 [526/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:58.009 [527/764] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:58.009 [528/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:58.009 [529/764] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:58.009 [530/764] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:58.009 [531/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:58.009 [532/764] Linking static target drivers/librte_mempool_ring.a 00:02:58.009 [533/764] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:58.009 [534/764] Linking static target lib/librte_member.a 00:02:58.009 [535/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:58.009 [536/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:58.009 [537/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:58.009 [538/764] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:58.009 [539/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:58.009 [540/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:58.009 [541/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:58.009 [542/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:58.009 [543/764] Linking static target lib/librte_pdcp.a 00:02:58.009 [544/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:58.009 [545/764] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:58.009 [546/764] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 
00:02:58.009 [547/764] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:58.009 [548/764] Linking static target lib/librte_port.a 00:02:58.009 [549/764] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.009 [550/764] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:58.009 [551/764] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.009 [552/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:58.009 [553/764] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.290 [554/764] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:58.290 [555/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:58.290 [556/764] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:58.290 [557/764] Linking static target lib/librte_ipsec.a 00:02:58.290 [558/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:58.290 [559/764] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:58.290 [560/764] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:58.290 [561/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:58.290 [562/764] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:58.290 [563/764] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:58.290 [564/764] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:58.290 [565/764] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:58.290 [566/764] Linking static target lib/acl/libavx2_tmp.a 00:02:58.290 [567/764] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:58.290 [568/764] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 
00:02:58.290 [569/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:58.290 [570/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:58.290 [571/764] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:58.290 [572/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:58.290 [573/764] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:58.290 [574/764] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.290 [575/764] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:58.290 [576/764] Compiling C object app/dpdk-testpmd.p/test-pmd_hairpin.c.o 00:02:58.290 [577/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:58.290 [578/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:58.290 [579/764] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.290 [580/764] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:58.290 [581/764] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:58.290 [582/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:58.290 [583/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:58.290 [584/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:58.290 [585/764] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:58.290 [586/764] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:58.290 [587/764] Linking static target lib/librte_hash.a 00:02:58.551 [588/764] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:02:58.551 [589/764] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:58.551 [590/764] Compiling C object 
app/dpdk-test-regex.p/test-regex_main.c.o 00:02:58.551 [591/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:58.551 [592/764] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.551 [593/764] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:58.551 [594/764] Linking static target lib/librte_eventdev.a 00:02:58.551 [595/764] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:58.551 [596/764] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:58.551 [597/764] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:58.551 [598/764] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:58.551 [599/764] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:58.551 [600/764] Linking static target lib/librte_acl.a 00:02:58.551 [601/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:58.551 [602/764] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.551 [603/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:58.551 [604/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:58.551 [605/764] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:58.551 [606/764] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:58.551 [607/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:58.551 [608/764] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:58.551 [609/764] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.551 [610/764] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.551 [611/764] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:58.812 [612/764] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:58.812 [613/764] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:58.812 [614/764] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:58.812 [615/764] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.812 [616/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:58.812 [617/764] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.073 [618/764] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.073 [619/764] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:59.073 [620/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:59.073 [621/764] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:59.334 [622/764] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:59.334 [623/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:59.595 [624/764] Linking static target lib/librte_ethdev.a 00:02:59.595 [625/764] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.595 [626/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:59.856 [627/764] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:59.856 [628/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:59.856 [629/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:00.116 [630/764] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.378 [631/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:00.378 [632/764] Linking static target 
drivers/libtmp_rte_net_i40e.a 00:03:00.378 [633/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:00.639 [634/764] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:00.639 [635/764] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:00.639 [636/764] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:00.639 [637/764] Linking static target drivers/librte_net_i40e.a 00:03:00.900 [638/764] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:01.844 [639/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:01.844 [640/764] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.844 [641/764] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:02.416 [642/764] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.716 [643/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:05.716 [644/764] Linking static target lib/librte_pipeline.a 00:03:07.630 [645/764] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:07.630 [646/764] Linking static target lib/librte_vhost.a 00:03:07.889 [647/764] Linking target app/dpdk-proc-info 00:03:07.889 [648/764] Linking target app/dpdk-test-dma-perf 00:03:07.889 [649/764] Linking target app/dpdk-test-acl 00:03:07.889 [650/764] Linking target app/dpdk-test-regex 00:03:07.889 [651/764] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.889 [652/764] Linking target app/dpdk-test-mldev 00:03:08.150 [653/764] Linking target app/dpdk-test-fib 00:03:08.150 [654/764] Linking target app/dpdk-dumpcap 00:03:08.150 [655/764] Linking target app/dpdk-test-gpudev 00:03:08.150 [656/764] Linking target app/dpdk-graph 00:03:08.150 [657/764] Linking target 
lib/librte_eal.so.25.0 00:03:08.150 [658/764] Linking target app/dpdk-test-compress-perf 00:03:08.150 [659/764] Linking target app/dpdk-test-flow-perf 00:03:08.150 [660/764] Linking target app/dpdk-test-cmdline 00:03:08.150 [661/764] Linking target app/dpdk-pdump 00:03:08.150 [662/764] Linking target app/dpdk-test-sad 00:03:08.150 [663/764] Linking target app/dpdk-test-bbdev 00:03:08.150 [664/764] Linking target app/dpdk-test-pipeline 00:03:08.150 [665/764] Linking target app/dpdk-test-crypto-perf 00:03:08.150 [666/764] Linking target app/dpdk-test-security-perf 00:03:08.150 [667/764] Linking target app/dpdk-test-eventdev 00:03:08.150 [668/764] Linking target app/dpdk-testpmd 00:03:08.150 [669/764] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols 00:03:08.150 [670/764] Linking target lib/librte_meter.so.25.0 00:03:08.150 [671/764] Linking target lib/librte_timer.so.25.0 00:03:08.150 [672/764] Linking target lib/librte_ring.so.25.0 00:03:08.150 [673/764] Linking target lib/librte_pci.so.25.0 00:03:08.150 [674/764] Linking target lib/librte_dmadev.so.25.0 00:03:08.150 [675/764] Linking target lib/librte_cfgfile.so.25.0 00:03:08.150 [676/764] Linking target lib/librte_jobstats.so.25.0 00:03:08.150 [677/764] Linking target lib/librte_stack.so.25.0 00:03:08.150 [678/764] Linking target lib/librte_rawdev.so.25.0 00:03:08.150 [679/764] Linking target drivers/librte_bus_vdev.so.25.0 00:03:08.150 [680/764] Linking target lib/librte_acl.so.25.0 00:03:08.412 [681/764] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols 00:03:08.412 [682/764] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols 00:03:08.412 [683/764] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols 00:03:08.412 [684/764] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols 00:03:08.412 [685/764] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols 
00:03:08.412 [686/764] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols 00:03:08.412 [687/764] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols 00:03:08.412 [688/764] Linking target lib/librte_rcu.so.25.0 00:03:08.412 [689/764] Linking target lib/librte_mempool.so.25.0 00:03:08.412 [690/764] Linking target drivers/librte_bus_pci.so.25.0 00:03:08.672 [691/764] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols 00:03:08.672 [692/764] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols 00:03:08.672 [693/764] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols 00:03:08.672 [694/764] Linking target drivers/librte_mempool_ring.so.25.0 00:03:08.672 [695/764] Linking target lib/librte_mbuf.so.25.0 00:03:08.672 [696/764] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols 00:03:08.932 [697/764] Linking target lib/librte_gpudev.so.25.0 00:03:08.932 [698/764] Linking target lib/librte_net.so.25.0 00:03:08.932 [699/764] Linking target lib/librte_compressdev.so.25.0 00:03:08.932 [700/764] Linking target lib/librte_reorder.so.25.0 00:03:08.932 [701/764] Linking target lib/librte_bbdev.so.25.0 00:03:08.932 [702/764] Linking target lib/librte_regexdev.so.25.0 00:03:08.932 [703/764] Linking target lib/librte_distributor.so.25.0 00:03:08.932 [704/764] Linking target lib/librte_cryptodev.so.25.0 00:03:08.932 [705/764] Linking target lib/librte_mldev.so.25.0 00:03:08.932 [706/764] Linking target lib/librte_sched.so.25.0 00:03:08.932 [707/764] Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols 00:03:08.932 [708/764] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols 00:03:08.932 [709/764] Generating symbol file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols 00:03:08.932 [710/764] Generating symbol file 
lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols 00:03:08.932 [711/764] Linking target lib/librte_rib.so.25.0 00:03:08.932 [712/764] Linking target lib/librte_cmdline.so.25.0 00:03:08.932 [713/764] Linking target lib/librte_hash.so.25.0 00:03:08.932 [714/764] Linking target lib/librte_security.so.25.0 00:03:09.193 [715/764] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.193 [716/764] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols 00:03:09.193 [717/764] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols 00:03:09.193 [718/764] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols 00:03:09.193 [719/764] Linking target lib/librte_ethdev.so.25.0 00:03:09.193 [720/764] Linking target lib/librte_efd.so.25.0 00:03:09.193 [721/764] Linking target lib/librte_fib.so.25.0 00:03:09.193 [722/764] Linking target lib/librte_lpm.so.25.0 00:03:09.193 [723/764] Linking target lib/librte_pdcp.so.25.0 00:03:09.193 [724/764] Linking target lib/librte_member.so.25.0 00:03:09.193 [725/764] Linking target lib/librte_ipsec.so.25.0 00:03:09.453 [726/764] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols 00:03:09.453 [727/764] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols 00:03:09.453 [728/764] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols 00:03:09.453 [729/764] Linking target lib/librte_metrics.so.25.0 00:03:09.453 [730/764] Linking target lib/librte_gso.so.25.0 00:03:09.453 [731/764] Linking target lib/librte_pcapng.so.25.0 00:03:09.453 [732/764] Linking target lib/librte_gro.so.25.0 00:03:09.453 [733/764] Linking target lib/librte_ip_frag.so.25.0 00:03:09.453 [734/764] Linking target lib/librte_power.so.25.0 00:03:09.453 [735/764] Linking target lib/librte_bpf.so.25.0 00:03:09.453 [736/764] Linking target lib/librte_eventdev.so.25.0 00:03:09.453 [737/764] 
Linking target drivers/librte_net_i40e.so.25.0 00:03:09.453 [738/764] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols 00:03:09.453 [739/764] Generating symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols 00:03:09.453 [740/764] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols 00:03:09.453 [741/764] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols 00:03:09.453 [742/764] Generating symbol file lib/librte_power.so.25.0.p/librte_power.so.25.0.symbols 00:03:09.453 [743/764] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols 00:03:09.713 [744/764] Linking target lib/librte_bitratestats.so.25.0 00:03:09.713 [745/764] Linking target lib/librte_latencystats.so.25.0 00:03:09.713 [746/764] Linking target lib/librte_pdump.so.25.0 00:03:09.713 [747/764] Linking target drivers/librte_power_acpi.so.25.0 00:03:09.713 [748/764] Linking target drivers/librte_power_intel_uncore.so.25.0 00:03:09.713 [749/764] Linking target drivers/librte_power_intel_pstate.so.25.0 00:03:09.713 [750/764] Linking target lib/librte_graph.so.25.0 00:03:09.713 [751/764] Linking target drivers/librte_power_amd_pstate.so.25.0 00:03:09.713 [752/764] Linking target drivers/librte_power_cppc.so.25.0 00:03:09.713 [753/764] Linking target drivers/librte_power_kvm_vm.so.25.0 00:03:09.713 [754/764] Linking target lib/librte_dispatcher.so.25.0 00:03:09.713 [755/764] Linking target lib/librte_port.so.25.0 00:03:09.713 [756/764] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.713 [757/764] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols 00:03:09.713 [758/764] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols 00:03:09.713 [759/764] Linking target lib/librte_vhost.so.25.0 00:03:09.972 [760/764] Linking target lib/librte_node.so.25.0 00:03:09.972 [761/764] Linking target 
lib/librte_table.so.25.0 00:03:09.972 [762/764] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols 00:03:11.355 [763/764] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.355 [764/764] Linking target lib/librte_pipeline.so.25.0 00:03:11.355 12:33:41 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:03:11.355 12:33:41 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:11.355 12:33:41 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 install 00:03:11.355 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:11.620 [0/1] Installing files. 00:03:11.620 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/memory.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/cpu.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/counters.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:03:11.620 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:11.620 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.621 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.621 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.621 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_eddsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.621 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:11.621 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.621 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_skeleton.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/snippets/snippet_match_gre.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/snippets/snippet_match_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:11.621 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.621 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.622 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.622 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:11.622 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.622 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:11.622 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:11.623 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.623 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.623 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.623 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:11.624 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.625 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:11.625 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.625 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.625 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:11.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:11.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:11.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 
00:03:11.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:11.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:11.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:11.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:11.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:11.890 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:11.890 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_log.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_kvargs.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_argparse.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_argparse.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_telemetry.so.25.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_eal.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_ring.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_rcu.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_mempool.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_mbuf.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_net.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_meter.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_ethdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_pci.so.25.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_cmdline.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_metrics.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_hash.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_timer.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_acl.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_bbdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_bitratestats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_bpf.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_cfgfile.so.25.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_compressdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_cryptodev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_distributor.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_dmadev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_efd.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_eventdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_dispatcher.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_gpudev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing 
lib/librte_gro.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_gso.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_ip_frag.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_jobstats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.890 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_latencystats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_lpm.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_member.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_pcapng.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_power.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing 
lib/librte_rawdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_regexdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_mldev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_rib.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_reorder.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_sched.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_security.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_stack.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_vhost.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_ipsec.so.25.0 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_pdcp.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_fib.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_port.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_pdump.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_table.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_pipeline.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_graph.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing lib/librte_node.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing drivers/librte_bus_pci.so.25.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:03:11.891 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing drivers/librte_bus_vdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:03:11.891 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing drivers/librte_mempool_ring.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:03:11.891 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing drivers/librte_net_i40e.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:03:11.891 Installing drivers/librte_power_acpi.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing drivers/librte_power_acpi.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:03:11.891 Installing drivers/librte_power_amd_pstate.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing drivers/librte_power_amd_pstate.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:03:11.891 Installing drivers/librte_power_cppc.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing drivers/librte_power_cppc.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:03:11.891 Installing drivers/librte_power_intel_pstate.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 Installing drivers/librte_power_intel_pstate.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:03:11.891 Installing drivers/librte_power_intel_uncore.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.891 
Installing drivers/librte_power_intel_uncore.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0
00:03:11.891 Installing drivers/librte_power_kvm_vm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:11.891 Installing drivers/librte_power_kvm_vm.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0
00:03:11.891 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:11.891 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:11.891 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:11.891 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:11.891 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:11.891 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:11.891 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:11.891 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:11.891 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:11.891 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:11.891 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:11.891 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:11.891 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:11.891 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:11.891 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:11.891 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:11.891 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:11.891 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:11.891 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:11.891 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:11.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/argparse/rte_argparse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:11.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:11.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:11.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:11.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:11.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:11.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:11.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:11.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:11.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:11.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:11.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:11.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.891 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitset.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore_var.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ptr_compress/rte_ptr_compress.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.892 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_cksum.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip4.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.893 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.894 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.894 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.894 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.894 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.894 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.894 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.894 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.894 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.894 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.894 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.894 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:11.894 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/power_cpufreq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/power_uncore_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_cpufreq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_qos.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:12.158
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.158 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/power/kvm_vm/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry-exporter.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:12.159 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:12.159 Installing symlink pointing to librte_log.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.25 00:03:12.159 Installing symlink pointing to librte_log.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:03:12.159 Installing symlink pointing to librte_kvargs.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.25 00:03:12.159 Installing symlink pointing to librte_kvargs.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:12.159 Installing symlink pointing to librte_argparse.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so.25 00:03:12.159 Installing symlink pointing to librte_argparse.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so 00:03:12.159 Installing symlink pointing to librte_telemetry.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.25 00:03:12.159 Installing symlink 
pointing to librte_telemetry.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:12.159 Installing symlink pointing to librte_eal.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.25 00:03:12.159 Installing symlink pointing to librte_eal.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:12.159 Installing symlink pointing to librte_ring.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.25 00:03:12.159 Installing symlink pointing to librte_ring.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:12.159 Installing symlink pointing to librte_rcu.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.25 00:03:12.159 Installing symlink pointing to librte_rcu.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:12.159 Installing symlink pointing to librte_mempool.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.25 00:03:12.159 Installing symlink pointing to librte_mempool.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:12.159 Installing symlink pointing to librte_mbuf.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.25 00:03:12.159 Installing symlink pointing to librte_mbuf.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:12.159 Installing symlink pointing to librte_net.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.25 00:03:12.159 Installing symlink pointing to librte_net.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:12.159 Installing symlink pointing to librte_meter.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.25 00:03:12.159 Installing 
symlink pointing to librte_meter.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:12.159 Installing symlink pointing to librte_ethdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.25 00:03:12.159 Installing symlink pointing to librte_ethdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:12.159 Installing symlink pointing to librte_pci.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.25 00:03:12.159 Installing symlink pointing to librte_pci.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:12.159 Installing symlink pointing to librte_cmdline.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.25 00:03:12.159 Installing symlink pointing to librte_cmdline.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:12.159 Installing symlink pointing to librte_metrics.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.25 00:03:12.159 Installing symlink pointing to librte_metrics.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:12.160 Installing symlink pointing to librte_hash.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.25 00:03:12.160 Installing symlink pointing to librte_hash.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:12.160 Installing symlink pointing to librte_timer.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.25 00:03:12.160 Installing symlink pointing to librte_timer.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:12.160 Installing symlink pointing to librte_acl.so.25.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.25 00:03:12.160 Installing symlink pointing to librte_acl.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:12.160 Installing symlink pointing to librte_bbdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.25 00:03:12.160 Installing symlink pointing to librte_bbdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:12.160 Installing symlink pointing to librte_bitratestats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.25 00:03:12.160 Installing symlink pointing to librte_bitratestats.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:12.160 Installing symlink pointing to librte_bpf.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.25 00:03:12.160 Installing symlink pointing to librte_bpf.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:12.160 Installing symlink pointing to librte_cfgfile.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.25 00:03:12.160 Installing symlink pointing to librte_cfgfile.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:12.160 Installing symlink pointing to librte_compressdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.25 00:03:12.160 Installing symlink pointing to librte_compressdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:12.160 Installing symlink pointing to librte_cryptodev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.25 00:03:12.160 Installing symlink pointing to librte_cryptodev.so.25 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:12.160 Installing symlink pointing to librte_distributor.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.25 00:03:12.160 Installing symlink pointing to librte_distributor.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:12.160 Installing symlink pointing to librte_dmadev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.25 00:03:12.160 Installing symlink pointing to librte_dmadev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:12.160 Installing symlink pointing to librte_efd.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.25 00:03:12.160 Installing symlink pointing to librte_efd.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:12.160 Installing symlink pointing to librte_eventdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.25 00:03:12.160 Installing symlink pointing to librte_eventdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:12.160 Installing symlink pointing to librte_dispatcher.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.25 00:03:12.160 Installing symlink pointing to librte_dispatcher.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:03:12.160 Installing symlink pointing to librte_gpudev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.25 00:03:12.160 Installing symlink pointing to librte_gpudev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:12.160 Installing symlink pointing to librte_gro.so.25.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.25 00:03:12.160 Installing symlink pointing to librte_gro.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:12.160 Installing symlink pointing to librte_gso.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.25 00:03:12.160 Installing symlink pointing to librte_gso.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:12.160 Installing symlink pointing to librte_ip_frag.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.25 00:03:12.160 Installing symlink pointing to librte_ip_frag.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:12.160 Installing symlink pointing to librte_jobstats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.25 00:03:12.160 Installing symlink pointing to librte_jobstats.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:12.160 Installing symlink pointing to librte_latencystats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.25 00:03:12.160 Installing symlink pointing to librte_latencystats.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:12.160 Installing symlink pointing to librte_lpm.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.25 00:03:12.160 Installing symlink pointing to librte_lpm.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:12.160 Installing symlink pointing to librte_member.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.25 00:03:12.160 Installing symlink pointing to librte_member.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:12.160 
Installing symlink pointing to librte_pcapng.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.25 00:03:12.160 Installing symlink pointing to librte_pcapng.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:12.160 Installing symlink pointing to librte_power.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.25 00:03:12.160 Installing symlink pointing to librte_power.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:12.160 Installing symlink pointing to librte_rawdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.25 00:03:12.160 Installing symlink pointing to librte_rawdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:12.160 Installing symlink pointing to librte_regexdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.25 00:03:12.160 Installing symlink pointing to librte_regexdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:12.160 Installing symlink pointing to librte_mldev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.25 00:03:12.160 Installing symlink pointing to librte_mldev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:03:12.160 Installing symlink pointing to librte_rib.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.25 00:03:12.160 Installing symlink pointing to librte_rib.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:12.160 Installing symlink pointing to librte_reorder.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.25 00:03:12.160 Installing symlink pointing to librte_reorder.so.25 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:12.160 Installing symlink pointing to librte_sched.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.25 00:03:12.160 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so' 00:03:12.160 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25' 00:03:12.160 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0' 00:03:12.160 './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so' 00:03:12.160 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25' 00:03:12.160 './librte_bus_vdev.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25.0' 00:03:12.160 './librte_mempool_ring.so' -> 'dpdk/pmds-25.0/librte_mempool_ring.so' 00:03:12.160 './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25' 00:03:12.160 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0' 00:03:12.160 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so' 00:03:12.160 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25' 00:03:12.160 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0' 00:03:12.160 './librte_power_acpi.so' -> 'dpdk/pmds-25.0/librte_power_acpi.so' 00:03:12.160 './librte_power_acpi.so.25' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25' 00:03:12.160 './librte_power_acpi.so.25.0' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25.0' 00:03:12.160 './librte_power_amd_pstate.so' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so' 00:03:12.160 './librte_power_amd_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25' 00:03:12.160 './librte_power_amd_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0' 00:03:12.160 './librte_power_cppc.so' -> 'dpdk/pmds-25.0/librte_power_cppc.so' 00:03:12.160 './librte_power_cppc.so.25' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25' 00:03:12.160 './librte_power_cppc.so.25.0' -> 
'dpdk/pmds-25.0/librte_power_cppc.so.25.0' 00:03:12.160 './librte_power_intel_pstate.so' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so' 00:03:12.160 './librte_power_intel_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25' 00:03:12.160 './librte_power_intel_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0' 00:03:12.160 './librte_power_intel_uncore.so' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so' 00:03:12.160 './librte_power_intel_uncore.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so.25' 00:03:12.160 './librte_power_intel_uncore.so.25.0' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0' 00:03:12.160 './librte_power_kvm_vm.so' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so' 00:03:12.160 './librte_power_kvm_vm.so.25' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25' 00:03:12.160 './librte_power_kvm_vm.so.25.0' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0' 00:03:12.160 Installing symlink pointing to librte_sched.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:12.160 Installing symlink pointing to librte_security.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.25 00:03:12.160 Installing symlink pointing to librte_security.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:12.160 Installing symlink pointing to librte_stack.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.25 00:03:12.160 Installing symlink pointing to librte_stack.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:12.160 Installing symlink pointing to librte_vhost.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.25 00:03:12.160 Installing symlink pointing to librte_vhost.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:12.160 Installing symlink pointing to librte_ipsec.so.25.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.25 00:03:12.160 Installing symlink pointing to librte_ipsec.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:12.160 Installing symlink pointing to librte_pdcp.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.25 00:03:12.160 Installing symlink pointing to librte_pdcp.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:03:12.160 Installing symlink pointing to librte_fib.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.25 00:03:12.160 Installing symlink pointing to librte_fib.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:12.160 Installing symlink pointing to librte_port.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.25 00:03:12.160 Installing symlink pointing to librte_port.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:12.161 Installing symlink pointing to librte_pdump.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.25 00:03:12.161 Installing symlink pointing to librte_pdump.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:12.161 Installing symlink pointing to librte_table.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.25 00:03:12.161 Installing symlink pointing to librte_table.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:12.161 Installing symlink pointing to librte_pipeline.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.25 00:03:12.161 Installing symlink pointing to librte_pipeline.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:12.161 Installing symlink pointing to 
librte_graph.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.25 00:03:12.161 Installing symlink pointing to librte_graph.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:12.161 Installing symlink pointing to librte_node.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.25 00:03:12.161 Installing symlink pointing to librte_node.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:12.161 Installing symlink pointing to librte_bus_pci.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25 00:03:12.161 Installing symlink pointing to librte_bus_pci.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:03:12.161 Installing symlink pointing to librte_bus_vdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25 00:03:12.161 Installing symlink pointing to librte_bus_vdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:03:12.161 Installing symlink pointing to librte_mempool_ring.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25 00:03:12.161 Installing symlink pointing to librte_mempool_ring.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:03:12.161 Installing symlink pointing to librte_net_i40e.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25 00:03:12.161 Installing symlink pointing to librte_net_i40e.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:03:12.161 Installing symlink pointing to librte_power_acpi.so.25.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25 00:03:12.161 Installing symlink pointing to librte_power_acpi.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so 00:03:12.161 Installing symlink pointing to librte_power_amd_pstate.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25 00:03:12.161 Installing symlink pointing to librte_power_amd_pstate.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so 00:03:12.161 Installing symlink pointing to librte_power_cppc.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25 00:03:12.161 Installing symlink pointing to librte_power_cppc.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so 00:03:12.161 Installing symlink pointing to librte_power_intel_pstate.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25 00:03:12.161 Installing symlink pointing to librte_power_intel_pstate.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so 00:03:12.161 Installing symlink pointing to librte_power_intel_uncore.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25 00:03:12.161 Installing symlink pointing to librte_power_intel_uncore.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so 00:03:12.161 Installing symlink pointing to librte_power_kvm_vm.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25 00:03:12.161 Installing symlink pointing to librte_power_kvm_vm.so.25 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so 00:03:12.161 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:03:12.161 12:33:42 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:12.161 12:33:42 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:12.161 00:03:12.161 real 0m25.434s 00:03:12.161 user 7m45.252s 00:03:12.161 sys 3m57.804s 00:03:12.161 12:33:42 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:12.161 12:33:42 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:12.161 ************************************ 00:03:12.161 END TEST build_native_dpdk 00:03:12.161 ************************************ 00:03:12.161 12:33:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:12.161 12:33:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:12.161 12:33:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:12.161 12:33:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:12.161 12:33:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:12.161 12:33:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:12.161 12:33:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:12.161 12:33:42 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:12.161 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
00:03:12.422 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.422 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.422 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:12.994 Using 'verbs' RDMA provider 00:03:28.849 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:41.080 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:41.913 Creating mk/config.mk...done. 00:03:41.913 Creating mk/cc.flags.mk...done. 00:03:41.913 Type 'make' to build. 00:03:41.913 12:34:11 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:03:41.913 12:34:11 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:41.913 12:34:11 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:41.913 12:34:11 -- common/autotest_common.sh@10 -- $ set +x 00:03:41.913 ************************************ 00:03:41.913 START TEST make 00:03:41.913 ************************************ 00:03:41.913 12:34:11 make -- common/autotest_common.sh@1129 -- $ make -j144 00:03:42.484 make[1]: Nothing to be done for 'all'. 
00:03:43.870 The Meson build system 00:03:43.870 Version: 1.5.0 00:03:43.870 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:43.870 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:43.870 Build type: native build 00:03:43.870 Project name: libvfio-user 00:03:43.870 Project version: 0.0.1 00:03:43.870 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:43.870 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:43.870 Host machine cpu family: x86_64 00:03:43.870 Host machine cpu: x86_64 00:03:43.870 Run-time dependency threads found: YES 00:03:43.870 Library dl found: YES 00:03:43.870 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:43.870 Run-time dependency json-c found: YES 0.17 00:03:43.870 Run-time dependency cmocka found: YES 1.1.7 00:03:43.870 Program pytest-3 found: NO 00:03:43.870 Program flake8 found: NO 00:03:43.870 Program misspell-fixer found: NO 00:03:43.870 Program restructuredtext-lint found: NO 00:03:43.870 Program valgrind found: YES (/usr/bin/valgrind) 00:03:43.870 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:43.870 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:43.870 Compiler for C supports arguments -Wwrite-strings: YES 00:03:43.870 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:43.870 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:43.870 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:43.870 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:43.870 Build targets in project: 8 00:03:43.870 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:43.870 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:43.870 00:03:43.870 libvfio-user 0.0.1 00:03:43.870 00:03:43.870 User defined options 00:03:43.870 buildtype : debug 00:03:43.870 default_library: shared 00:03:43.870 libdir : /usr/local/lib 00:03:43.870 00:03:43.870 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:44.129 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:44.389 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:44.389 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:44.389 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:44.389 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:44.389 [5/37] Compiling C object samples/null.p/null.c.o 00:03:44.389 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:44.389 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:44.389 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:44.389 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:44.389 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:44.389 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:44.389 [12/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:44.389 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:44.389 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:44.389 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:44.389 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:44.389 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:44.389 [18/37] Compiling C object 
lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:44.389 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:44.389 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:44.389 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:44.389 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:44.389 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:44.389 [24/37] Compiling C object samples/server.p/server.c.o 00:03:44.389 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:44.389 [26/37] Compiling C object samples/client.p/client.c.o 00:03:44.389 [27/37] Linking target samples/client 00:03:44.390 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:44.650 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:44.650 [30/37] Linking target test/unit_tests 00:03:44.650 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:03:44.650 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:44.650 [33/37] Linking target samples/shadow_ioeventfd_server 00:03:44.650 [34/37] Linking target samples/null 00:03:44.650 [35/37] Linking target samples/server 00:03:44.650 [36/37] Linking target samples/gpio-pci-idio-16 00:03:44.650 [37/37] Linking target samples/lspci 00:03:44.910 INFO: autodetecting backend as ninja 00:03:44.910 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:44.910 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:45.174 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:45.174 ninja: no work to do. 
00:04:07.329 CC lib/ut_mock/mock.o 00:04:07.329 CC lib/log/log.o 00:04:07.329 CC lib/ut/ut.o 00:04:07.329 CC lib/log/log_flags.o 00:04:07.329 CC lib/log/log_deprecated.o 00:04:07.329 LIB libspdk_ut.a 00:04:07.329 SO libspdk_ut.so.2.0 00:04:07.329 LIB libspdk_ut_mock.a 00:04:07.329 LIB libspdk_log.a 00:04:07.329 SO libspdk_ut_mock.so.6.0 00:04:07.329 SO libspdk_log.so.7.1 00:04:07.329 SYMLINK libspdk_ut.so 00:04:07.329 SYMLINK libspdk_ut_mock.so 00:04:07.329 SYMLINK libspdk_log.so 00:04:07.902 CC lib/util/base64.o 00:04:07.902 CC lib/dma/dma.o 00:04:07.902 CC lib/util/bit_array.o 00:04:07.902 CC lib/util/cpuset.o 00:04:07.902 CC lib/ioat/ioat.o 00:04:07.902 CXX lib/trace_parser/trace.o 00:04:07.902 CC lib/util/crc16.o 00:04:07.902 CC lib/util/crc32.o 00:04:07.902 CC lib/util/crc32c.o 00:04:07.902 CC lib/util/crc32_ieee.o 00:04:07.902 CC lib/util/crc64.o 00:04:07.902 CC lib/util/dif.o 00:04:07.902 CC lib/util/fd.o 00:04:07.902 CC lib/util/fd_group.o 00:04:07.902 CC lib/util/file.o 00:04:07.902 CC lib/util/hexlify.o 00:04:07.902 CC lib/util/iov.o 00:04:07.902 CC lib/util/math.o 00:04:07.902 CC lib/util/net.o 00:04:07.902 CC lib/util/pipe.o 00:04:07.902 CC lib/util/strerror_tls.o 00:04:07.902 CC lib/util/string.o 00:04:07.902 CC lib/util/uuid.o 00:04:07.902 CC lib/util/xor.o 00:04:07.902 CC lib/util/zipf.o 00:04:07.902 CC lib/util/md5.o 00:04:07.902 CC lib/vfio_user/host/vfio_user.o 00:04:07.902 CC lib/vfio_user/host/vfio_user_pci.o 00:04:07.902 LIB libspdk_dma.a 00:04:08.163 SO libspdk_dma.so.5.0 00:04:08.163 LIB libspdk_ioat.a 00:04:08.163 SYMLINK libspdk_dma.so 00:04:08.163 SO libspdk_ioat.so.7.0 00:04:08.163 SYMLINK libspdk_ioat.so 00:04:08.163 LIB libspdk_vfio_user.a 00:04:08.163 SO libspdk_vfio_user.so.5.0 00:04:08.424 SYMLINK libspdk_vfio_user.so 00:04:08.424 LIB libspdk_util.a 00:04:08.424 SO libspdk_util.so.10.1 00:04:08.686 SYMLINK libspdk_util.so 00:04:08.686 LIB libspdk_trace_parser.a 00:04:08.686 SO libspdk_trace_parser.so.6.0 00:04:08.686 SYMLINK 
libspdk_trace_parser.so 00:04:08.947 CC lib/idxd/idxd.o 00:04:08.947 CC lib/idxd/idxd_user.o 00:04:08.947 CC lib/idxd/idxd_kernel.o 00:04:08.947 CC lib/json/json_parse.o 00:04:08.947 CC lib/json/json_util.o 00:04:08.947 CC lib/vmd/vmd.o 00:04:08.947 CC lib/json/json_write.o 00:04:08.947 CC lib/rdma_utils/rdma_utils.o 00:04:08.947 CC lib/conf/conf.o 00:04:08.947 CC lib/vmd/led.o 00:04:08.947 CC lib/env_dpdk/env.o 00:04:08.947 CC lib/env_dpdk/memory.o 00:04:08.947 CC lib/env_dpdk/pci.o 00:04:08.947 CC lib/env_dpdk/init.o 00:04:08.947 CC lib/env_dpdk/threads.o 00:04:08.947 CC lib/env_dpdk/pci_ioat.o 00:04:08.947 CC lib/env_dpdk/pci_virtio.o 00:04:08.947 CC lib/env_dpdk/pci_vmd.o 00:04:08.947 CC lib/env_dpdk/pci_idxd.o 00:04:08.947 CC lib/env_dpdk/pci_event.o 00:04:08.947 CC lib/env_dpdk/sigbus_handler.o 00:04:08.947 CC lib/env_dpdk/pci_dpdk.o 00:04:08.947 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:08.947 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:09.209 LIB libspdk_conf.a 00:04:09.209 LIB libspdk_rdma_utils.a 00:04:09.209 SO libspdk_conf.so.6.0 00:04:09.209 LIB libspdk_json.a 00:04:09.209 SO libspdk_rdma_utils.so.1.0 00:04:09.209 SO libspdk_json.so.6.0 00:04:09.209 SYMLINK libspdk_conf.so 00:04:09.209 SYMLINK libspdk_rdma_utils.so 00:04:09.209 SYMLINK libspdk_json.so 00:04:09.470 LIB libspdk_idxd.a 00:04:09.470 SO libspdk_idxd.so.12.1 00:04:09.470 LIB libspdk_vmd.a 00:04:09.470 SO libspdk_vmd.so.6.0 00:04:09.470 SYMLINK libspdk_idxd.so 00:04:09.731 SYMLINK libspdk_vmd.so 00:04:09.731 CC lib/rdma_provider/common.o 00:04:09.731 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:09.731 CC lib/jsonrpc/jsonrpc_server.o 00:04:09.731 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:09.731 CC lib/jsonrpc/jsonrpc_client.o 00:04:09.731 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:09.992 LIB libspdk_rdma_provider.a 00:04:09.992 SO libspdk_rdma_provider.so.7.0 00:04:09.992 LIB libspdk_jsonrpc.a 00:04:09.992 SO libspdk_jsonrpc.so.6.0 00:04:09.992 SYMLINK libspdk_rdma_provider.so 00:04:09.992 SYMLINK 
libspdk_jsonrpc.so 00:04:10.253 LIB libspdk_env_dpdk.a 00:04:10.253 SO libspdk_env_dpdk.so.15.1 00:04:10.253 SYMLINK libspdk_env_dpdk.so 00:04:10.514 CC lib/rpc/rpc.o 00:04:10.775 LIB libspdk_rpc.a 00:04:10.775 SO libspdk_rpc.so.6.0 00:04:10.775 SYMLINK libspdk_rpc.so 00:04:11.036 CC lib/keyring/keyring.o 00:04:11.036 CC lib/keyring/keyring_rpc.o 00:04:11.036 CC lib/notify/notify.o 00:04:11.036 CC lib/trace/trace.o 00:04:11.036 CC lib/notify/notify_rpc.o 00:04:11.036 CC lib/trace/trace_flags.o 00:04:11.036 CC lib/trace/trace_rpc.o 00:04:11.297 LIB libspdk_notify.a 00:04:11.297 SO libspdk_notify.so.6.0 00:04:11.297 LIB libspdk_keyring.a 00:04:11.297 LIB libspdk_trace.a 00:04:11.297 SO libspdk_keyring.so.2.0 00:04:11.297 SO libspdk_trace.so.11.0 00:04:11.297 SYMLINK libspdk_notify.so 00:04:11.558 SYMLINK libspdk_keyring.so 00:04:11.558 SYMLINK libspdk_trace.so 00:04:11.818 CC lib/sock/sock.o 00:04:11.818 CC lib/thread/thread.o 00:04:11.818 CC lib/sock/sock_rpc.o 00:04:11.818 CC lib/thread/iobuf.o 00:04:12.390 LIB libspdk_sock.a 00:04:12.390 SO libspdk_sock.so.10.0 00:04:12.390 SYMLINK libspdk_sock.so 00:04:12.652 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:12.652 CC lib/nvme/nvme_ctrlr.o 00:04:12.652 CC lib/nvme/nvme_fabric.o 00:04:12.652 CC lib/nvme/nvme_ns_cmd.o 00:04:12.652 CC lib/nvme/nvme_ns.o 00:04:12.652 CC lib/nvme/nvme_pcie_common.o 00:04:12.652 CC lib/nvme/nvme_pcie.o 00:04:12.652 CC lib/nvme/nvme_qpair.o 00:04:12.652 CC lib/nvme/nvme.o 00:04:12.652 CC lib/nvme/nvme_quirks.o 00:04:12.652 CC lib/nvme/nvme_transport.o 00:04:12.652 CC lib/nvme/nvme_discovery.o 00:04:12.652 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:12.652 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:12.652 CC lib/nvme/nvme_tcp.o 00:04:12.652 CC lib/nvme/nvme_opal.o 00:04:12.652 CC lib/nvme/nvme_io_msg.o 00:04:12.652 CC lib/nvme/nvme_poll_group.o 00:04:12.652 CC lib/nvme/nvme_zns.o 00:04:12.652 CC lib/nvme/nvme_stubs.o 00:04:12.652 CC lib/nvme/nvme_auth.o 00:04:12.652 CC lib/nvme/nvme_cuse.o 00:04:12.652 CC 
lib/nvme/nvme_vfio_user.o 00:04:12.652 CC lib/nvme/nvme_rdma.o 00:04:13.222 LIB libspdk_thread.a 00:04:13.222 SO libspdk_thread.so.11.0 00:04:13.222 SYMLINK libspdk_thread.so 00:04:13.793 CC lib/accel/accel.o 00:04:13.793 CC lib/fsdev/fsdev.o 00:04:13.793 CC lib/accel/accel_rpc.o 00:04:13.793 CC lib/accel/accel_sw.o 00:04:13.793 CC lib/init/json_config.o 00:04:13.793 CC lib/init/subsystem.o 00:04:13.793 CC lib/fsdev/fsdev_io.o 00:04:13.793 CC lib/init/subsystem_rpc.o 00:04:13.793 CC lib/fsdev/fsdev_rpc.o 00:04:13.793 CC lib/init/rpc.o 00:04:13.793 CC lib/virtio/virtio.o 00:04:13.793 CC lib/virtio/virtio_vhost_user.o 00:04:13.793 CC lib/virtio/virtio_vfio_user.o 00:04:13.793 CC lib/virtio/virtio_pci.o 00:04:13.793 CC lib/vfu_tgt/tgt_endpoint.o 00:04:13.793 CC lib/blob/blobstore.o 00:04:13.793 CC lib/vfu_tgt/tgt_rpc.o 00:04:13.793 CC lib/blob/request.o 00:04:13.793 CC lib/blob/zeroes.o 00:04:13.793 CC lib/blob/blob_bs_dev.o 00:04:14.054 LIB libspdk_init.a 00:04:14.054 SO libspdk_init.so.6.0 00:04:14.054 LIB libspdk_virtio.a 00:04:14.054 SYMLINK libspdk_init.so 00:04:14.054 LIB libspdk_vfu_tgt.a 00:04:14.054 SO libspdk_virtio.so.7.0 00:04:14.054 SO libspdk_vfu_tgt.so.3.0 00:04:14.314 SYMLINK libspdk_virtio.so 00:04:14.314 SYMLINK libspdk_vfu_tgt.so 00:04:14.314 LIB libspdk_fsdev.a 00:04:14.314 SO libspdk_fsdev.so.2.0 00:04:14.314 CC lib/event/app.o 00:04:14.314 CC lib/event/reactor.o 00:04:14.314 CC lib/event/log_rpc.o 00:04:14.314 CC lib/event/app_rpc.o 00:04:14.314 CC lib/event/scheduler_static.o 00:04:14.314 SYMLINK libspdk_fsdev.so 00:04:14.575 LIB libspdk_accel.a 00:04:14.575 SO libspdk_accel.so.16.0 00:04:14.837 LIB libspdk_nvme.a 00:04:14.837 SYMLINK libspdk_accel.so 00:04:14.837 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:14.837 LIB libspdk_event.a 00:04:14.837 SO libspdk_event.so.14.0 00:04:14.837 SO libspdk_nvme.so.15.0 00:04:15.097 SYMLINK libspdk_event.so 00:04:15.097 SYMLINK libspdk_nvme.so 00:04:15.097 CC lib/bdev/bdev.o 00:04:15.098 CC 
lib/bdev/bdev_rpc.o 00:04:15.098 CC lib/bdev/bdev_zone.o 00:04:15.098 CC lib/bdev/part.o 00:04:15.098 CC lib/bdev/scsi_nvme.o 00:04:15.358 LIB libspdk_fuse_dispatcher.a 00:04:15.358 SO libspdk_fuse_dispatcher.so.1.0 00:04:15.620 SYMLINK libspdk_fuse_dispatcher.so 00:04:16.565 LIB libspdk_blob.a 00:04:16.565 SO libspdk_blob.so.12.0 00:04:16.565 SYMLINK libspdk_blob.so 00:04:16.826 CC lib/lvol/lvol.o 00:04:16.826 CC lib/blobfs/blobfs.o 00:04:16.826 CC lib/blobfs/tree.o 00:04:17.771 LIB libspdk_bdev.a 00:04:17.771 SO libspdk_bdev.so.17.0 00:04:17.771 LIB libspdk_blobfs.a 00:04:17.771 SO libspdk_blobfs.so.11.0 00:04:17.771 SYMLINK libspdk_bdev.so 00:04:17.771 LIB libspdk_lvol.a 00:04:17.771 SO libspdk_lvol.so.11.0 00:04:17.771 SYMLINK libspdk_blobfs.so 00:04:17.771 SYMLINK libspdk_lvol.so 00:04:18.033 CC lib/nbd/nbd.o 00:04:18.033 CC lib/nbd/nbd_rpc.o 00:04:18.033 CC lib/nvmf/ctrlr.o 00:04:18.033 CC lib/nvmf/ctrlr_discovery.o 00:04:18.033 CC lib/nvmf/ctrlr_bdev.o 00:04:18.033 CC lib/nvmf/subsystem.o 00:04:18.033 CC lib/scsi/dev.o 00:04:18.033 CC lib/nvmf/nvmf.o 00:04:18.033 CC lib/ublk/ublk.o 00:04:18.033 CC lib/nvmf/nvmf_rpc.o 00:04:18.033 CC lib/scsi/lun.o 00:04:18.033 CC lib/ftl/ftl_core.o 00:04:18.033 CC lib/scsi/port.o 00:04:18.033 CC lib/ublk/ublk_rpc.o 00:04:18.033 CC lib/ftl/ftl_init.o 00:04:18.033 CC lib/nvmf/transport.o 00:04:18.033 CC lib/ftl/ftl_layout.o 00:04:18.033 CC lib/scsi/scsi.o 00:04:18.033 CC lib/nvmf/tcp.o 00:04:18.033 CC lib/scsi/scsi_bdev.o 00:04:18.033 CC lib/ftl/ftl_debug.o 00:04:18.033 CC lib/nvmf/stubs.o 00:04:18.033 CC lib/scsi/scsi_pr.o 00:04:18.033 CC lib/ftl/ftl_io.o 00:04:18.033 CC lib/scsi/scsi_rpc.o 00:04:18.033 CC lib/nvmf/mdns_server.o 00:04:18.033 CC lib/ftl/ftl_sb.o 00:04:18.033 CC lib/scsi/task.o 00:04:18.033 CC lib/nvmf/vfio_user.o 00:04:18.033 CC lib/ftl/ftl_l2p.o 00:04:18.033 CC lib/nvmf/rdma.o 00:04:18.033 CC lib/ftl/ftl_l2p_flat.o 00:04:18.033 CC lib/nvmf/auth.o 00:04:18.033 CC lib/ftl/ftl_nv_cache.o 00:04:18.033 CC 
lib/ftl/ftl_band.o 00:04:18.033 CC lib/ftl/ftl_band_ops.o 00:04:18.033 CC lib/ftl/ftl_writer.o 00:04:18.033 CC lib/ftl/ftl_rq.o 00:04:18.033 CC lib/ftl/ftl_reloc.o 00:04:18.033 CC lib/ftl/ftl_l2p_cache.o 00:04:18.033 CC lib/ftl/ftl_p2l.o 00:04:18.033 CC lib/ftl/ftl_p2l_log.o 00:04:18.033 CC lib/ftl/mngt/ftl_mngt.o 00:04:18.033 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:18.033 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:18.033 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:18.033 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:18.033 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:18.033 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:18.033 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:18.033 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:18.033 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:18.033 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:18.033 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:18.033 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:18.033 CC lib/ftl/utils/ftl_conf.o 00:04:18.033 CC lib/ftl/utils/ftl_mempool.o 00:04:18.033 CC lib/ftl/utils/ftl_md.o 00:04:18.293 CC lib/ftl/utils/ftl_bitmap.o 00:04:18.293 CC lib/ftl/utils/ftl_property.o 00:04:18.293 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:18.293 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:18.293 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:18.293 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:18.293 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:18.293 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:18.293 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:18.293 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:18.293 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:18.293 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:18.293 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:18.293 CC lib/ftl/base/ftl_base_dev.o 00:04:18.293 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:18.293 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:18.293 CC lib/ftl/base/ftl_base_bdev.o 00:04:18.293 CC lib/ftl/ftl_trace.o 00:04:18.865 LIB libspdk_nbd.a 00:04:18.865 SO libspdk_nbd.so.7.0 00:04:18.865 SYMLINK libspdk_nbd.so 00:04:18.865 LIB libspdk_scsi.a 00:04:18.865 SO 
libspdk_scsi.so.9.0 00:04:19.127 LIB libspdk_ublk.a 00:04:19.127 SYMLINK libspdk_scsi.so 00:04:19.127 SO libspdk_ublk.so.3.0 00:04:19.127 SYMLINK libspdk_ublk.so 00:04:19.388 LIB libspdk_ftl.a 00:04:19.388 CC lib/vhost/vhost.o 00:04:19.388 CC lib/iscsi/conn.o 00:04:19.388 CC lib/vhost/vhost_rpc.o 00:04:19.388 CC lib/iscsi/init_grp.o 00:04:19.388 CC lib/iscsi/iscsi.o 00:04:19.388 CC lib/vhost/vhost_scsi.o 00:04:19.388 CC lib/iscsi/param.o 00:04:19.388 CC lib/vhost/vhost_blk.o 00:04:19.388 CC lib/iscsi/portal_grp.o 00:04:19.388 CC lib/vhost/rte_vhost_user.o 00:04:19.388 CC lib/iscsi/tgt_node.o 00:04:19.388 CC lib/iscsi/iscsi_subsystem.o 00:04:19.388 CC lib/iscsi/iscsi_rpc.o 00:04:19.388 CC lib/iscsi/task.o 00:04:19.650 SO libspdk_ftl.so.9.0 00:04:19.912 SYMLINK libspdk_ftl.so 00:04:20.173 LIB libspdk_nvmf.a 00:04:20.435 SO libspdk_nvmf.so.20.0 00:04:20.435 LIB libspdk_vhost.a 00:04:20.435 SO libspdk_vhost.so.8.0 00:04:20.435 SYMLINK libspdk_nvmf.so 00:04:20.696 SYMLINK libspdk_vhost.so 00:04:20.696 LIB libspdk_iscsi.a 00:04:20.696 SO libspdk_iscsi.so.8.0 00:04:20.957 SYMLINK libspdk_iscsi.so 00:04:21.530 CC module/vfu_device/vfu_virtio.o 00:04:21.530 CC module/vfu_device/vfu_virtio_blk.o 00:04:21.530 CC module/env_dpdk/env_dpdk_rpc.o 00:04:21.530 CC module/vfu_device/vfu_virtio_rpc.o 00:04:21.530 CC module/vfu_device/vfu_virtio_scsi.o 00:04:21.530 CC module/vfu_device/vfu_virtio_fs.o 00:04:21.530 CC module/accel/error/accel_error.o 00:04:21.530 CC module/accel/error/accel_error_rpc.o 00:04:21.530 CC module/keyring/file/keyring.o 00:04:21.530 CC module/keyring/file/keyring_rpc.o 00:04:21.530 CC module/accel/iaa/accel_iaa.o 00:04:21.530 CC module/keyring/linux/keyring.o 00:04:21.530 LIB libspdk_env_dpdk_rpc.a 00:04:21.530 CC module/keyring/linux/keyring_rpc.o 00:04:21.530 CC module/accel/iaa/accel_iaa_rpc.o 00:04:21.530 CC module/accel/ioat/accel_ioat.o 00:04:21.530 CC module/accel/ioat/accel_ioat_rpc.o 00:04:21.530 CC module/accel/dsa/accel_dsa.o 00:04:21.530 CC 
module/accel/dsa/accel_dsa_rpc.o 00:04:21.530 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:21.530 CC module/blob/bdev/blob_bdev.o 00:04:21.530 CC module/fsdev/aio/fsdev_aio.o 00:04:21.530 CC module/scheduler/gscheduler/gscheduler.o 00:04:21.530 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:21.530 CC module/fsdev/aio/linux_aio_mgr.o 00:04:21.530 CC module/sock/posix/posix.o 00:04:21.530 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:21.792 SO libspdk_env_dpdk_rpc.so.6.0 00:04:21.792 SYMLINK libspdk_env_dpdk_rpc.so 00:04:21.792 LIB libspdk_keyring_linux.a 00:04:21.792 LIB libspdk_scheduler_gscheduler.a 00:04:21.792 LIB libspdk_keyring_file.a 00:04:21.792 LIB libspdk_scheduler_dpdk_governor.a 00:04:21.792 SO libspdk_keyring_linux.so.1.0 00:04:21.792 LIB libspdk_accel_error.a 00:04:21.792 SO libspdk_scheduler_gscheduler.so.4.0 00:04:21.792 LIB libspdk_accel_iaa.a 00:04:21.792 SO libspdk_keyring_file.so.2.0 00:04:21.792 LIB libspdk_scheduler_dynamic.a 00:04:21.792 LIB libspdk_accel_ioat.a 00:04:21.792 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:21.792 SO libspdk_accel_error.so.2.0 00:04:21.792 SO libspdk_accel_iaa.so.3.0 00:04:21.792 SO libspdk_scheduler_dynamic.so.4.0 00:04:22.054 SO libspdk_accel_ioat.so.6.0 00:04:22.054 SYMLINK libspdk_scheduler_gscheduler.so 00:04:22.054 SYMLINK libspdk_keyring_linux.so 00:04:22.054 SYMLINK libspdk_keyring_file.so 00:04:22.054 LIB libspdk_blob_bdev.a 00:04:22.054 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:22.054 LIB libspdk_accel_dsa.a 00:04:22.054 SYMLINK libspdk_scheduler_dynamic.so 00:04:22.054 SYMLINK libspdk_accel_error.so 00:04:22.054 SYMLINK libspdk_accel_iaa.so 00:04:22.054 SO libspdk_blob_bdev.so.12.0 00:04:22.054 SYMLINK libspdk_accel_ioat.so 00:04:22.054 SO libspdk_accel_dsa.so.5.0 00:04:22.054 LIB libspdk_vfu_device.a 00:04:22.054 SYMLINK libspdk_blob_bdev.so 00:04:22.054 SYMLINK libspdk_accel_dsa.so 00:04:22.054 SO libspdk_vfu_device.so.3.0 00:04:22.316 SYMLINK libspdk_vfu_device.so 00:04:22.316 
LIB libspdk_fsdev_aio.a 00:04:22.316 SO libspdk_fsdev_aio.so.1.0 00:04:22.316 LIB libspdk_sock_posix.a 00:04:22.316 SO libspdk_sock_posix.so.6.0 00:04:22.316 SYMLINK libspdk_fsdev_aio.so 00:04:22.577 SYMLINK libspdk_sock_posix.so 00:04:22.577 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:22.577 CC module/bdev/malloc/bdev_malloc.o 00:04:22.577 CC module/blobfs/bdev/blobfs_bdev.o 00:04:22.577 CC module/bdev/gpt/gpt.o 00:04:22.577 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:22.577 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:22.577 CC module/bdev/gpt/vbdev_gpt.o 00:04:22.577 CC module/bdev/delay/vbdev_delay.o 00:04:22.577 CC module/bdev/error/vbdev_error.o 00:04:22.577 CC module/bdev/lvol/vbdev_lvol.o 00:04:22.577 CC module/bdev/error/vbdev_error_rpc.o 00:04:22.577 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:22.577 CC module/bdev/null/bdev_null.o 00:04:22.577 CC module/bdev/null/bdev_null_rpc.o 00:04:22.577 CC module/bdev/raid/bdev_raid.o 00:04:22.577 CC module/bdev/raid/bdev_raid_rpc.o 00:04:22.577 CC module/bdev/raid/bdev_raid_sb.o 00:04:22.577 CC module/bdev/raid/raid0.o 00:04:22.577 CC module/bdev/iscsi/bdev_iscsi.o 00:04:22.577 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:22.577 CC module/bdev/raid/raid1.o 00:04:22.577 CC module/bdev/split/vbdev_split.o 00:04:22.577 CC module/bdev/aio/bdev_aio.o 00:04:22.577 CC module/bdev/raid/concat.o 00:04:22.577 CC module/bdev/split/vbdev_split_rpc.o 00:04:22.577 CC module/bdev/aio/bdev_aio_rpc.o 00:04:22.577 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:22.577 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:22.577 CC module/bdev/passthru/vbdev_passthru.o 00:04:22.577 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:22.577 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:22.577 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:22.577 CC module/bdev/ftl/bdev_ftl.o 00:04:22.577 CC module/bdev/nvme/bdev_nvme.o 00:04:22.577 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:22.577 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:22.577 
CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:22.577 CC module/bdev/nvme/nvme_rpc.o 00:04:22.577 CC module/bdev/nvme/bdev_mdns_client.o 00:04:22.577 CC module/bdev/nvme/vbdev_opal.o 00:04:22.577 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:22.577 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:22.838 LIB libspdk_blobfs_bdev.a 00:04:23.098 SO libspdk_blobfs_bdev.so.6.0 00:04:23.098 LIB libspdk_bdev_gpt.a 00:04:23.098 LIB libspdk_bdev_split.a 00:04:23.098 LIB libspdk_bdev_null.a 00:04:23.098 LIB libspdk_bdev_error.a 00:04:23.098 SYMLINK libspdk_blobfs_bdev.so 00:04:23.098 SO libspdk_bdev_gpt.so.6.0 00:04:23.098 SO libspdk_bdev_split.so.6.0 00:04:23.098 SO libspdk_bdev_null.so.6.0 00:04:23.098 LIB libspdk_bdev_ftl.a 00:04:23.098 SO libspdk_bdev_error.so.6.0 00:04:23.098 LIB libspdk_bdev_malloc.a 00:04:23.098 LIB libspdk_bdev_passthru.a 00:04:23.098 SO libspdk_bdev_ftl.so.6.0 00:04:23.098 LIB libspdk_bdev_aio.a 00:04:23.098 LIB libspdk_bdev_zone_block.a 00:04:23.098 LIB libspdk_bdev_iscsi.a 00:04:23.098 LIB libspdk_bdev_delay.a 00:04:23.098 SYMLINK libspdk_bdev_gpt.so 00:04:23.098 SYMLINK libspdk_bdev_split.so 00:04:23.098 SO libspdk_bdev_malloc.so.6.0 00:04:23.098 SYMLINK libspdk_bdev_null.so 00:04:23.098 SO libspdk_bdev_passthru.so.6.0 00:04:23.098 SYMLINK libspdk_bdev_error.so 00:04:23.098 SO libspdk_bdev_zone_block.so.6.0 00:04:23.098 SO libspdk_bdev_aio.so.6.0 00:04:23.098 SO libspdk_bdev_iscsi.so.6.0 00:04:23.098 SO libspdk_bdev_delay.so.6.0 00:04:23.098 SYMLINK libspdk_bdev_ftl.so 00:04:23.359 SYMLINK libspdk_bdev_malloc.so 00:04:23.359 SYMLINK libspdk_bdev_passthru.so 00:04:23.359 SYMLINK libspdk_bdev_zone_block.so 00:04:23.359 SYMLINK libspdk_bdev_aio.so 00:04:23.359 SYMLINK libspdk_bdev_iscsi.so 00:04:23.359 LIB libspdk_bdev_lvol.a 00:04:23.359 SYMLINK libspdk_bdev_delay.so 00:04:23.359 LIB libspdk_bdev_virtio.a 00:04:23.359 SO libspdk_bdev_lvol.so.6.0 00:04:23.359 SO libspdk_bdev_virtio.so.6.0 00:04:23.359 SYMLINK libspdk_bdev_lvol.so 00:04:23.359 SYMLINK 
libspdk_bdev_virtio.so 00:04:23.620 LIB libspdk_bdev_raid.a 00:04:23.620 SO libspdk_bdev_raid.so.6.0 00:04:23.881 SYMLINK libspdk_bdev_raid.so 00:04:25.266 LIB libspdk_bdev_nvme.a 00:04:25.266 SO libspdk_bdev_nvme.so.7.1 00:04:25.266 SYMLINK libspdk_bdev_nvme.so 00:04:25.836 CC module/event/subsystems/iobuf/iobuf.o 00:04:25.836 CC module/event/subsystems/vmd/vmd.o 00:04:25.836 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:25.836 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:25.836 CC module/event/subsystems/fsdev/fsdev.o 00:04:25.836 CC module/event/subsystems/keyring/keyring.o 00:04:25.836 CC module/event/subsystems/scheduler/scheduler.o 00:04:25.836 CC module/event/subsystems/sock/sock.o 00:04:25.836 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:25.836 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:26.097 LIB libspdk_event_fsdev.a 00:04:26.097 LIB libspdk_event_sock.a 00:04:26.097 LIB libspdk_event_keyring.a 00:04:26.097 LIB libspdk_event_vmd.a 00:04:26.097 LIB libspdk_event_vhost_blk.a 00:04:26.097 LIB libspdk_event_scheduler.a 00:04:26.097 LIB libspdk_event_vfu_tgt.a 00:04:26.097 LIB libspdk_event_iobuf.a 00:04:26.097 SO libspdk_event_fsdev.so.1.0 00:04:26.097 SO libspdk_event_sock.so.5.0 00:04:26.097 SO libspdk_event_keyring.so.1.0 00:04:26.097 SO libspdk_event_vhost_blk.so.3.0 00:04:26.097 SO libspdk_event_vmd.so.6.0 00:04:26.097 SO libspdk_event_scheduler.so.4.0 00:04:26.097 SO libspdk_event_vfu_tgt.so.3.0 00:04:26.097 SO libspdk_event_iobuf.so.3.0 00:04:26.097 SYMLINK libspdk_event_fsdev.so 00:04:26.097 SYMLINK libspdk_event_sock.so 00:04:26.097 SYMLINK libspdk_event_keyring.so 00:04:26.097 SYMLINK libspdk_event_vhost_blk.so 00:04:26.359 SYMLINK libspdk_event_scheduler.so 00:04:26.359 SYMLINK libspdk_event_vmd.so 00:04:26.359 SYMLINK libspdk_event_vfu_tgt.so 00:04:26.359 SYMLINK libspdk_event_iobuf.so 00:04:26.620 CC module/event/subsystems/accel/accel.o 00:04:26.882 LIB libspdk_event_accel.a 00:04:26.882 SO libspdk_event_accel.so.6.0 
00:04:26.882 SYMLINK libspdk_event_accel.so 00:04:27.144 CC module/event/subsystems/bdev/bdev.o 00:04:27.405 LIB libspdk_event_bdev.a 00:04:27.405 SO libspdk_event_bdev.so.6.0 00:04:27.405 SYMLINK libspdk_event_bdev.so 00:04:27.977 CC module/event/subsystems/nbd/nbd.o 00:04:27.977 CC module/event/subsystems/ublk/ublk.o 00:04:27.977 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:27.977 CC module/event/subsystems/scsi/scsi.o 00:04:27.977 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:27.977 LIB libspdk_event_nbd.a 00:04:27.977 LIB libspdk_event_ublk.a 00:04:27.977 LIB libspdk_event_scsi.a 00:04:27.977 SO libspdk_event_nbd.so.6.0 00:04:28.238 SO libspdk_event_ublk.so.3.0 00:04:28.238 SO libspdk_event_scsi.so.6.0 00:04:28.238 LIB libspdk_event_nvmf.a 00:04:28.238 SYMLINK libspdk_event_nbd.so 00:04:28.238 SYMLINK libspdk_event_ublk.so 00:04:28.238 SYMLINK libspdk_event_scsi.so 00:04:28.238 SO libspdk_event_nvmf.so.6.0 00:04:28.238 SYMLINK libspdk_event_nvmf.so 00:04:28.500 CC module/event/subsystems/iscsi/iscsi.o 00:04:28.500 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:28.760 LIB libspdk_event_vhost_scsi.a 00:04:28.760 LIB libspdk_event_iscsi.a 00:04:28.760 SO libspdk_event_vhost_scsi.so.3.0 00:04:28.760 SO libspdk_event_iscsi.so.6.0 00:04:28.760 SYMLINK libspdk_event_vhost_scsi.so 00:04:28.760 SYMLINK libspdk_event_iscsi.so 00:04:29.021 SO libspdk.so.6.0 00:04:29.022 SYMLINK libspdk.so 00:04:29.597 CC app/trace_record/trace_record.o 00:04:29.597 CXX app/trace/trace.o 00:04:29.597 CC app/spdk_lspci/spdk_lspci.o 00:04:29.597 CC test/rpc_client/rpc_client_test.o 00:04:29.597 CC app/spdk_nvme_discover/discovery_aer.o 00:04:29.597 CC app/spdk_top/spdk_top.o 00:04:29.597 CC app/spdk_nvme_perf/perf.o 00:04:29.597 CC app/spdk_nvme_identify/identify.o 00:04:29.597 TEST_HEADER include/spdk/accel.h 00:04:29.597 TEST_HEADER include/spdk/accel_module.h 00:04:29.597 TEST_HEADER include/spdk/assert.h 00:04:29.597 TEST_HEADER include/spdk/barrier.h 00:04:29.597 
TEST_HEADER include/spdk/bdev.h 00:04:29.597 TEST_HEADER include/spdk/base64.h 00:04:29.597 TEST_HEADER include/spdk/bdev_module.h 00:04:29.597 TEST_HEADER include/spdk/bdev_zone.h 00:04:29.597 TEST_HEADER include/spdk/bit_array.h 00:04:29.597 TEST_HEADER include/spdk/bit_pool.h 00:04:29.597 TEST_HEADER include/spdk/blob_bdev.h 00:04:29.597 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:29.597 TEST_HEADER include/spdk/blobfs.h 00:04:29.597 TEST_HEADER include/spdk/blob.h 00:04:29.597 TEST_HEADER include/spdk/config.h 00:04:29.597 TEST_HEADER include/spdk/conf.h 00:04:29.597 TEST_HEADER include/spdk/cpuset.h 00:04:29.597 TEST_HEADER include/spdk/crc16.h 00:04:29.597 TEST_HEADER include/spdk/crc32.h 00:04:29.597 TEST_HEADER include/spdk/crc64.h 00:04:29.597 TEST_HEADER include/spdk/dif.h 00:04:29.597 CC app/iscsi_tgt/iscsi_tgt.o 00:04:29.597 TEST_HEADER include/spdk/dma.h 00:04:29.597 TEST_HEADER include/spdk/endian.h 00:04:29.597 TEST_HEADER include/spdk/env_dpdk.h 00:04:29.597 TEST_HEADER include/spdk/event.h 00:04:29.597 TEST_HEADER include/spdk/env.h 00:04:29.597 TEST_HEADER include/spdk/fd_group.h 00:04:29.597 TEST_HEADER include/spdk/fd.h 00:04:29.597 TEST_HEADER include/spdk/file.h 00:04:29.597 TEST_HEADER include/spdk/fsdev.h 00:04:29.597 TEST_HEADER include/spdk/fsdev_module.h 00:04:29.597 TEST_HEADER include/spdk/ftl.h 00:04:29.597 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:29.597 CC app/spdk_dd/spdk_dd.o 00:04:29.597 TEST_HEADER include/spdk/hexlify.h 00:04:29.597 TEST_HEADER include/spdk/gpt_spec.h 00:04:29.597 TEST_HEADER include/spdk/histogram_data.h 00:04:29.598 TEST_HEADER include/spdk/idxd.h 00:04:29.598 TEST_HEADER include/spdk/idxd_spec.h 00:04:29.598 TEST_HEADER include/spdk/init.h 00:04:29.598 TEST_HEADER include/spdk/ioat.h 00:04:29.598 TEST_HEADER include/spdk/ioat_spec.h 00:04:29.598 TEST_HEADER include/spdk/json.h 00:04:29.598 TEST_HEADER include/spdk/iscsi_spec.h 00:04:29.598 TEST_HEADER include/spdk/jsonrpc.h 00:04:29.598 TEST_HEADER 
include/spdk/keyring.h 00:04:29.598 TEST_HEADER include/spdk/keyring_module.h 00:04:29.598 TEST_HEADER include/spdk/likely.h 00:04:29.598 TEST_HEADER include/spdk/log.h 00:04:29.598 TEST_HEADER include/spdk/lvol.h 00:04:29.598 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:29.598 TEST_HEADER include/spdk/md5.h 00:04:29.598 TEST_HEADER include/spdk/memory.h 00:04:29.598 TEST_HEADER include/spdk/mmio.h 00:04:29.598 TEST_HEADER include/spdk/nbd.h 00:04:29.598 CC app/spdk_tgt/spdk_tgt.o 00:04:29.598 TEST_HEADER include/spdk/net.h 00:04:29.598 TEST_HEADER include/spdk/notify.h 00:04:29.598 TEST_HEADER include/spdk/nvme.h 00:04:29.598 TEST_HEADER include/spdk/nvme_intel.h 00:04:29.598 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:29.598 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:29.598 CC app/nvmf_tgt/nvmf_main.o 00:04:29.598 TEST_HEADER include/spdk/nvme_spec.h 00:04:29.598 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:29.598 TEST_HEADER include/spdk/nvme_zns.h 00:04:29.598 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:29.598 TEST_HEADER include/spdk/nvmf.h 00:04:29.598 TEST_HEADER include/spdk/nvmf_spec.h 00:04:29.598 TEST_HEADER include/spdk/nvmf_transport.h 00:04:29.598 TEST_HEADER include/spdk/opal.h 00:04:29.598 TEST_HEADER include/spdk/pci_ids.h 00:04:29.598 TEST_HEADER include/spdk/opal_spec.h 00:04:29.598 TEST_HEADER include/spdk/pipe.h 00:04:29.598 TEST_HEADER include/spdk/queue.h 00:04:29.598 TEST_HEADER include/spdk/reduce.h 00:04:29.598 TEST_HEADER include/spdk/rpc.h 00:04:29.598 TEST_HEADER include/spdk/scheduler.h 00:04:29.598 TEST_HEADER include/spdk/scsi.h 00:04:29.598 TEST_HEADER include/spdk/scsi_spec.h 00:04:29.598 TEST_HEADER include/spdk/sock.h 00:04:29.598 TEST_HEADER include/spdk/stdinc.h 00:04:29.598 TEST_HEADER include/spdk/string.h 00:04:29.598 TEST_HEADER include/spdk/trace.h 00:04:29.598 TEST_HEADER include/spdk/thread.h 00:04:29.598 TEST_HEADER include/spdk/trace_parser.h 00:04:29.598 TEST_HEADER include/spdk/tree.h 00:04:29.598 TEST_HEADER 
include/spdk/ublk.h 00:04:29.598 TEST_HEADER include/spdk/uuid.h 00:04:29.598 TEST_HEADER include/spdk/util.h 00:04:29.598 TEST_HEADER include/spdk/version.h 00:04:29.598 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:29.598 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:29.598 TEST_HEADER include/spdk/vmd.h 00:04:29.598 TEST_HEADER include/spdk/vhost.h 00:04:29.598 TEST_HEADER include/spdk/xor.h 00:04:29.598 TEST_HEADER include/spdk/zipf.h 00:04:29.598 CXX test/cpp_headers/accel.o 00:04:29.598 CXX test/cpp_headers/accel_module.o 00:04:29.598 CXX test/cpp_headers/assert.o 00:04:29.598 CXX test/cpp_headers/barrier.o 00:04:29.598 CXX test/cpp_headers/base64.o 00:04:29.598 CXX test/cpp_headers/bdev_zone.o 00:04:29.598 CXX test/cpp_headers/bdev.o 00:04:29.598 CXX test/cpp_headers/bdev_module.o 00:04:29.598 CXX test/cpp_headers/bit_array.o 00:04:29.598 CXX test/cpp_headers/bit_pool.o 00:04:29.598 CXX test/cpp_headers/blobfs_bdev.o 00:04:29.598 CXX test/cpp_headers/blob_bdev.o 00:04:29.598 CXX test/cpp_headers/blobfs.o 00:04:29.598 CXX test/cpp_headers/blob.o 00:04:29.598 CXX test/cpp_headers/config.o 00:04:29.598 CXX test/cpp_headers/conf.o 00:04:29.598 CXX test/cpp_headers/crc16.o 00:04:29.598 CXX test/cpp_headers/cpuset.o 00:04:29.598 CXX test/cpp_headers/crc64.o 00:04:29.598 CXX test/cpp_headers/crc32.o 00:04:29.598 CXX test/cpp_headers/dif.o 00:04:29.598 CXX test/cpp_headers/dma.o 00:04:29.598 CXX test/cpp_headers/endian.o 00:04:29.598 CXX test/cpp_headers/env_dpdk.o 00:04:29.598 CXX test/cpp_headers/fd_group.o 00:04:29.598 CXX test/cpp_headers/env.o 00:04:29.598 CXX test/cpp_headers/event.o 00:04:29.598 CXX test/cpp_headers/fsdev.o 00:04:29.598 CXX test/cpp_headers/fd.o 00:04:29.598 CXX test/cpp_headers/file.o 00:04:29.598 CXX test/cpp_headers/fsdev_module.o 00:04:29.598 CXX test/cpp_headers/ftl.o 00:04:29.598 CXX test/cpp_headers/fuse_dispatcher.o 00:04:29.598 CXX test/cpp_headers/gpt_spec.o 00:04:29.598 CXX test/cpp_headers/hexlify.o 00:04:29.598 CXX 
test/cpp_headers/histogram_data.o 00:04:29.598 CXX test/cpp_headers/idxd.o 00:04:29.598 CXX test/cpp_headers/idxd_spec.o 00:04:29.598 CXX test/cpp_headers/init.o 00:04:29.598 CXX test/cpp_headers/ioat.o 00:04:29.598 CXX test/cpp_headers/iscsi_spec.o 00:04:29.598 CXX test/cpp_headers/jsonrpc.o 00:04:29.598 CXX test/cpp_headers/json.o 00:04:29.598 CXX test/cpp_headers/ioat_spec.o 00:04:29.598 CXX test/cpp_headers/keyring_module.o 00:04:29.598 CXX test/cpp_headers/likely.o 00:04:29.598 CXX test/cpp_headers/keyring.o 00:04:29.598 CXX test/cpp_headers/md5.o 00:04:29.598 CXX test/cpp_headers/memory.o 00:04:29.598 CXX test/cpp_headers/lvol.o 00:04:29.598 CXX test/cpp_headers/log.o 00:04:29.598 CXX test/cpp_headers/mmio.o 00:04:29.598 CC test/thread/poller_perf/poller_perf.o 00:04:29.863 CXX test/cpp_headers/notify.o 00:04:29.864 CXX test/cpp_headers/nbd.o 00:04:29.864 CXX test/cpp_headers/net.o 00:04:29.864 CXX test/cpp_headers/nvme_intel.o 00:04:29.864 CXX test/cpp_headers/nvme.o 00:04:29.864 CXX test/cpp_headers/nvme_ocssd.o 00:04:29.864 CXX test/cpp_headers/nvme_spec.o 00:04:29.864 CXX test/cpp_headers/nvme_zns.o 00:04:29.864 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:29.864 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:29.864 CC test/env/vtophys/vtophys.o 00:04:29.864 CXX test/cpp_headers/nvmf_cmd.o 00:04:29.864 LINK spdk_lspci 00:04:29.864 CXX test/cpp_headers/nvmf_transport.o 00:04:29.864 CXX test/cpp_headers/nvmf.o 00:04:29.864 CXX test/cpp_headers/opal.o 00:04:29.864 CXX test/cpp_headers/nvmf_spec.o 00:04:29.864 CXX test/cpp_headers/opal_spec.o 00:04:29.864 CXX test/cpp_headers/pci_ids.o 00:04:29.864 CC test/app/histogram_perf/histogram_perf.o 00:04:29.864 CXX test/cpp_headers/pipe.o 00:04:29.864 CC examples/util/zipf/zipf.o 00:04:29.864 CXX test/cpp_headers/queue.o 00:04:29.864 CXX test/cpp_headers/reduce.o 00:04:29.864 CC examples/ioat/verify/verify.o 00:04:29.864 CXX test/cpp_headers/rpc.o 00:04:29.864 CC app/fio/nvme/fio_plugin.o 00:04:29.864 CXX 
test/cpp_headers/stdinc.o 00:04:29.864 CC test/app/stub/stub.o 00:04:29.864 CXX test/cpp_headers/scheduler.o 00:04:29.864 CXX test/cpp_headers/sock.o 00:04:29.864 CXX test/cpp_headers/scsi.o 00:04:29.864 CXX test/cpp_headers/scsi_spec.o 00:04:29.864 CXX test/cpp_headers/string.o 00:04:29.864 CC examples/ioat/perf/perf.o 00:04:29.864 CXX test/cpp_headers/trace.o 00:04:29.864 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:29.864 CXX test/cpp_headers/thread.o 00:04:29.864 CC test/app/jsoncat/jsoncat.o 00:04:29.864 CC test/env/memory/memory_ut.o 00:04:29.864 CXX test/cpp_headers/ublk.o 00:04:29.864 CXX test/cpp_headers/trace_parser.o 00:04:29.864 CXX test/cpp_headers/tree.o 00:04:29.864 CXX test/cpp_headers/util.o 00:04:29.864 CXX test/cpp_headers/uuid.o 00:04:29.864 CXX test/cpp_headers/version.o 00:04:29.864 CXX test/cpp_headers/vfio_user_pci.o 00:04:29.864 CXX test/cpp_headers/vfio_user_spec.o 00:04:29.864 CXX test/cpp_headers/vmd.o 00:04:29.864 CXX test/cpp_headers/vhost.o 00:04:29.864 CXX test/cpp_headers/xor.o 00:04:29.864 LINK rpc_client_test 00:04:29.864 CXX test/cpp_headers/zipf.o 00:04:29.864 CC test/app/bdev_svc/bdev_svc.o 00:04:29.864 CC test/dma/test_dma/test_dma.o 00:04:29.864 CC test/env/pci/pci_ut.o 00:04:29.864 CC app/fio/bdev/fio_plugin.o 00:04:29.864 LINK spdk_trace_record 00:04:29.864 LINK spdk_nvme_discover 00:04:30.134 LINK iscsi_tgt 00:04:30.134 LINK interrupt_tgt 00:04:30.400 LINK nvmf_tgt 00:04:30.401 LINK spdk_tgt 00:04:30.401 LINK poller_perf 00:04:30.401 CC test/env/mem_callbacks/mem_callbacks.o 00:04:30.401 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:30.401 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:30.401 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:30.401 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:30.663 LINK histogram_perf 00:04:30.663 LINK spdk_dd 00:04:30.663 LINK spdk_trace 00:04:30.924 LINK jsoncat 00:04:30.924 LINK vtophys 00:04:30.924 LINK bdev_svc 00:04:30.924 LINK zipf 00:04:30.924 LINK 
env_dpdk_post_init 00:04:30.924 LINK stub 00:04:30.924 LINK ioat_perf 00:04:30.924 LINK verify 00:04:30.924 LINK test_dma 00:04:31.185 CC test/event/reactor_perf/reactor_perf.o 00:04:31.185 LINK pci_ut 00:04:31.185 LINK nvme_fuzz 00:04:31.185 CC test/event/event_perf/event_perf.o 00:04:31.185 CC test/event/reactor/reactor.o 00:04:31.185 LINK vhost_fuzz 00:04:31.185 CC test/event/app_repeat/app_repeat.o 00:04:31.185 CC test/event/scheduler/scheduler.o 00:04:31.185 CC app/vhost/vhost.o 00:04:31.185 LINK spdk_nvme_perf 00:04:31.445 LINK spdk_bdev 00:04:31.445 LINK spdk_nvme 00:04:31.445 LINK spdk_nvme_identify 00:04:31.445 LINK event_perf 00:04:31.445 LINK mem_callbacks 00:04:31.445 LINK reactor_perf 00:04:31.445 LINK reactor 00:04:31.445 CC examples/idxd/perf/perf.o 00:04:31.445 LINK spdk_top 00:04:31.445 CC examples/vmd/lsvmd/lsvmd.o 00:04:31.445 CC examples/vmd/led/led.o 00:04:31.445 CC examples/sock/hello_world/hello_sock.o 00:04:31.445 LINK app_repeat 00:04:31.445 CC examples/thread/thread/thread_ex.o 00:04:31.445 LINK vhost 00:04:31.705 LINK scheduler 00:04:31.705 CC test/nvme/overhead/overhead.o 00:04:31.705 CC test/nvme/connect_stress/connect_stress.o 00:04:31.705 LINK lsvmd 00:04:31.705 CC test/nvme/aer/aer.o 00:04:31.705 CC test/nvme/reserve/reserve.o 00:04:31.705 CC test/nvme/startup/startup.o 00:04:31.705 CC test/nvme/e2edp/nvme_dp.o 00:04:31.705 CC test/nvme/compliance/nvme_compliance.o 00:04:31.705 CC test/nvme/cuse/cuse.o 00:04:31.705 CC test/nvme/reset/reset.o 00:04:31.705 CC test/nvme/boot_partition/boot_partition.o 00:04:31.705 CC test/nvme/fused_ordering/fused_ordering.o 00:04:31.705 CC test/nvme/sgl/sgl.o 00:04:31.705 CC test/nvme/err_injection/err_injection.o 00:04:31.705 CC test/nvme/fdp/fdp.o 00:04:31.705 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:31.705 CC test/nvme/simple_copy/simple_copy.o 00:04:31.705 LINK led 00:04:31.705 CC test/accel/dif/dif.o 00:04:31.705 CC test/blobfs/mkfs/mkfs.o 00:04:31.705 LINK hello_sock 00:04:31.966 LINK 
thread 00:04:31.966 CC test/lvol/esnap/esnap.o 00:04:31.966 LINK memory_ut 00:04:31.966 LINK idxd_perf 00:04:31.966 LINK connect_stress 00:04:31.966 LINK startup 00:04:31.966 LINK boot_partition 00:04:31.966 LINK reserve 00:04:31.966 LINK err_injection 00:04:31.966 LINK doorbell_aers 00:04:31.966 LINK fused_ordering 00:04:31.966 LINK aer 00:04:31.966 LINK reset 00:04:31.966 LINK overhead 00:04:31.966 LINK simple_copy 00:04:31.966 LINK sgl 00:04:31.966 LINK mkfs 00:04:31.966 LINK nvme_dp 00:04:31.966 LINK nvme_compliance 00:04:31.966 LINK fdp 00:04:32.227 LINK iscsi_fuzz 00:04:32.227 CC examples/nvme/hotplug/hotplug.o 00:04:32.227 CC examples/nvme/reconnect/reconnect.o 00:04:32.227 CC examples/nvme/abort/abort.o 00:04:32.227 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:32.227 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:32.227 CC examples/nvme/arbitration/arbitration.o 00:04:32.227 CC examples/nvme/hello_world/hello_world.o 00:04:32.227 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:32.487 LINK dif 00:04:32.487 CC examples/accel/perf/accel_perf.o 00:04:32.487 CC examples/blob/cli/blobcli.o 00:04:32.487 CC examples/blob/hello_world/hello_blob.o 00:04:32.487 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:32.487 LINK pmr_persistence 00:04:32.487 LINK cmb_copy 00:04:32.748 LINK hello_world 00:04:32.748 LINK hotplug 00:04:32.748 LINK reconnect 00:04:32.748 LINK abort 00:04:32.748 LINK arbitration 00:04:32.748 LINK hello_blob 00:04:32.748 LINK hello_fsdev 00:04:32.748 LINK nvme_manage 00:04:33.009 LINK accel_perf 00:04:33.009 LINK cuse 00:04:33.009 LINK blobcli 00:04:33.009 CC test/bdev/bdevio/bdevio.o 00:04:33.584 LINK bdevio 00:04:33.584 CC examples/bdev/hello_world/hello_bdev.o 00:04:33.584 CC examples/bdev/bdevperf/bdevperf.o 00:04:33.846 LINK hello_bdev 00:04:34.418 LINK bdevperf 00:04:34.990 CC examples/nvmf/nvmf/nvmf.o 00:04:35.250 LINK nvmf 00:04:36.192 LINK esnap 00:04:36.453 00:04:36.453 real 0m54.731s 00:04:36.453 user 6m38.405s 00:04:36.453 sys 
4m24.918s 00:04:36.453 12:35:06 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:36.453 12:35:06 make -- common/autotest_common.sh@10 -- $ set +x 00:04:36.453 ************************************ 00:04:36.453 END TEST make 00:04:36.453 ************************************ 00:04:36.713 12:35:06 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:36.713 12:35:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:36.713 12:35:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:36.713 12:35:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:36.713 12:35:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:36.713 12:35:06 -- pm/common@44 -- $ pid=3040646 00:04:36.713 12:35:06 -- pm/common@50 -- $ kill -TERM 3040646 00:04:36.713 12:35:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:36.713 12:35:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:36.713 12:35:06 -- pm/common@44 -- $ pid=3040647 00:04:36.713 12:35:06 -- pm/common@50 -- $ kill -TERM 3040647 00:04:36.713 12:35:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:36.713 12:35:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:36.713 12:35:06 -- pm/common@44 -- $ pid=3040649 00:04:36.713 12:35:06 -- pm/common@50 -- $ kill -TERM 3040649 00:04:36.713 12:35:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:36.713 12:35:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:36.713 12:35:06 -- pm/common@44 -- $ pid=3040672 00:04:36.713 12:35:06 -- pm/common@50 -- $ sudo -E kill -TERM 3040672 00:04:36.713 12:35:06 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 
00:04:36.713 12:35:06 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:36.713 12:35:06 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:36.713 12:35:06 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:36.713 12:35:06 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:36.974 12:35:06 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:36.974 12:35:06 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.974 12:35:06 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.974 12:35:06 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.974 12:35:06 -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.974 12:35:06 -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.974 12:35:06 -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.974 12:35:06 -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.974 12:35:06 -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.974 12:35:06 -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.974 12:35:06 -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.974 12:35:06 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.974 12:35:06 -- scripts/common.sh@344 -- # case "$op" in 00:04:36.974 12:35:06 -- scripts/common.sh@345 -- # : 1 00:04:36.974 12:35:06 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.974 12:35:06 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.974 12:35:06 -- scripts/common.sh@365 -- # decimal 1 00:04:36.974 12:35:06 -- scripts/common.sh@353 -- # local d=1 00:04:36.974 12:35:06 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.974 12:35:06 -- scripts/common.sh@355 -- # echo 1 00:04:36.974 12:35:06 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.974 12:35:06 -- scripts/common.sh@366 -- # decimal 2 00:04:36.974 12:35:06 -- scripts/common.sh@353 -- # local d=2 00:04:36.974 12:35:06 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.974 12:35:06 -- scripts/common.sh@355 -- # echo 2 00:04:36.974 12:35:06 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.974 12:35:06 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.974 12:35:06 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.974 12:35:06 -- scripts/common.sh@368 -- # return 0 00:04:36.974 12:35:06 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.974 12:35:06 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:36.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.974 --rc genhtml_branch_coverage=1 00:04:36.974 --rc genhtml_function_coverage=1 00:04:36.974 --rc genhtml_legend=1 00:04:36.974 --rc geninfo_all_blocks=1 00:04:36.974 --rc geninfo_unexecuted_blocks=1 00:04:36.974 00:04:36.974 ' 00:04:36.974 12:35:06 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:36.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.974 --rc genhtml_branch_coverage=1 00:04:36.974 --rc genhtml_function_coverage=1 00:04:36.974 --rc genhtml_legend=1 00:04:36.974 --rc geninfo_all_blocks=1 00:04:36.974 --rc geninfo_unexecuted_blocks=1 00:04:36.974 00:04:36.974 ' 00:04:36.975 12:35:06 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:36.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.975 --rc genhtml_branch_coverage=1 00:04:36.975 --rc 
genhtml_function_coverage=1 00:04:36.975 --rc genhtml_legend=1 00:04:36.975 --rc geninfo_all_blocks=1 00:04:36.975 --rc geninfo_unexecuted_blocks=1 00:04:36.975 00:04:36.975 ' 00:04:36.975 12:35:06 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:36.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.975 --rc genhtml_branch_coverage=1 00:04:36.975 --rc genhtml_function_coverage=1 00:04:36.975 --rc genhtml_legend=1 00:04:36.975 --rc geninfo_all_blocks=1 00:04:36.975 --rc geninfo_unexecuted_blocks=1 00:04:36.975 00:04:36.975 ' 00:04:36.975 12:35:06 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:36.975 12:35:06 -- nvmf/common.sh@7 -- # uname -s 00:04:36.975 12:35:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:36.975 12:35:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:36.975 12:35:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:36.975 12:35:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:36.975 12:35:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:36.975 12:35:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:36.975 12:35:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:36.975 12:35:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:36.975 12:35:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:36.975 12:35:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:36.975 12:35:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:36.975 12:35:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:36.975 12:35:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:36.975 12:35:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:36.975 12:35:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:36.975 12:35:06 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:36.975 12:35:06 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:36.975 12:35:06 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:36.975 12:35:06 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:36.975 12:35:06 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:36.975 12:35:06 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:36.975 12:35:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.975 12:35:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.975 12:35:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.975 12:35:06 -- paths/export.sh@5 -- # export PATH 00:04:36.975 12:35:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.975 12:35:06 -- nvmf/common.sh@51 -- # : 0 00:04:36.975 12:35:06 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:36.975 12:35:06 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:04:36.975 12:35:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:36.975 12:35:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:36.975 12:35:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:36.975 12:35:06 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:36.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:36.975 12:35:06 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:36.975 12:35:06 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:36.975 12:35:06 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:36.975 12:35:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:36.975 12:35:06 -- spdk/autotest.sh@32 -- # uname -s 00:04:36.975 12:35:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:36.975 12:35:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:36.975 12:35:06 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:36.975 12:35:06 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:36.975 12:35:06 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:36.975 12:35:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:36.975 12:35:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:36.975 12:35:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:36.975 12:35:06 -- spdk/autotest.sh@48 -- # udevadm_pid=3123675 00:04:36.975 12:35:06 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:36.975 12:35:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:36.975 12:35:06 -- pm/common@17 -- # local monitor 00:04:36.975 12:35:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:36.975 12:35:06 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:04:36.975 12:35:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:36.975 12:35:06 -- pm/common@21 -- # date +%s 00:04:36.975 12:35:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:36.975 12:35:06 -- pm/common@21 -- # date +%s 00:04:36.975 12:35:06 -- pm/common@25 -- # sleep 1 00:04:36.975 12:35:06 -- pm/common@21 -- # date +%s 00:04:36.975 12:35:06 -- pm/common@21 -- # date +%s 00:04:36.975 12:35:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732793706 00:04:36.975 12:35:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732793706 00:04:36.975 12:35:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732793706 00:04:36.975 12:35:06 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732793706 00:04:36.975 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732793706_collect-vmstat.pm.log 00:04:36.975 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732793706_collect-cpu-load.pm.log 00:04:36.975 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732793706_collect-cpu-temp.pm.log 00:04:36.975 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732793706_collect-bmc-pm.bmc.pm.log 00:04:37.916 
12:35:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:37.916 12:35:07 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:37.916 12:35:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.916 12:35:07 -- common/autotest_common.sh@10 -- # set +x 00:04:37.916 12:35:07 -- spdk/autotest.sh@59 -- # create_test_list 00:04:37.916 12:35:07 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:37.916 12:35:07 -- common/autotest_common.sh@10 -- # set +x 00:04:37.916 12:35:07 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:37.916 12:35:07 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:37.916 12:35:07 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:37.916 12:35:07 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:37.916 12:35:07 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:37.916 12:35:07 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:37.916 12:35:07 -- common/autotest_common.sh@1457 -- # uname 00:04:37.916 12:35:07 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:37.916 12:35:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:37.916 12:35:08 -- common/autotest_common.sh@1477 -- # uname 00:04:37.916 12:35:08 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:37.916 12:35:08 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:37.916 12:35:08 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:38.177 lcov: LCOV version 1.15 00:04:38.177 12:35:08 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:53.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:53.093 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:11.310 12:35:38 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:11.310 12:35:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:11.310 12:35:38 -- common/autotest_common.sh@10 -- # set +x 00:05:11.310 12:35:38 -- spdk/autotest.sh@78 -- # rm -f 00:05:11.310 12:35:38 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:11.882 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:05:11.882 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:05:11.882 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:05:11.882 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:05:12.142 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:05:12.143 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:05:12.143 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:05:12.143 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:05:12.143 0000:65:00.0 (144d a80a): Already using the nvme driver 00:05:12.143 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:05:12.143 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:05:12.143 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:05:12.143 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:05:12.143 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:05:12.403 
0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:05:12.403 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:05:12.403 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:05:12.664 12:35:42 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:12.664 12:35:42 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:12.664 12:35:42 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:12.664 12:35:42 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:12.664 12:35:42 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:12.664 12:35:42 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:12.664 12:35:42 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:12.664 12:35:42 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:12.664 12:35:42 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:12.664 12:35:42 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:12.664 12:35:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:12.664 12:35:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:12.664 12:35:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:12.664 12:35:42 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:12.664 12:35:42 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:12.664 No valid GPT data, bailing 00:05:12.664 12:35:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:12.664 12:35:42 -- scripts/common.sh@394 -- # pt= 00:05:12.664 12:35:42 -- scripts/common.sh@395 -- # return 1 00:05:12.664 12:35:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:12.664 1+0 records in 00:05:12.664 1+0 records out 00:05:12.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00194649 s, 539 MB/s 00:05:12.664 12:35:42 -- spdk/autotest.sh@105 -- # sync 00:05:12.664 12:35:42 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:12.664 12:35:42 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:12.664 12:35:42 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:22.663 12:35:51 -- spdk/autotest.sh@111 -- # uname -s 00:05:22.663 12:35:51 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:22.663 12:35:51 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:22.663 12:35:51 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:25.211 Hugepages 00:05:25.212 node hugesize free / total 00:05:25.212 node0 1048576kB 0 / 0 00:05:25.212 node0 2048kB 0 / 0 00:05:25.212 node1 1048576kB 0 / 0 00:05:25.212 node1 2048kB 0 / 0 00:05:25.212 00:05:25.212 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:25.212 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:25.212 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:25.212 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:25.212 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:25.212 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:25.212 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:25.212 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:25.212 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:25.212 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:25.212 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:25.212 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:25.212 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:25.212 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:25.212 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:25.212 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:25.212 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:25.212 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:25.212 12:35:54 -- spdk/autotest.sh@117 -- # uname -s 00:05:25.212 12:35:55 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:25.212 12:35:55 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:05:25.212 12:35:55 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:28.517 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:28.517 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:28.517 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:28.517 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:28.517 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:28.517 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:28.517 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:28.517 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:28.517 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:28.517 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:28.517 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:28.517 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:28.517 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:28.778 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:28.778 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:28.778 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:30.698 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:30.698 12:36:00 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:31.643 12:36:01 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:31.643 12:36:01 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:31.643 12:36:01 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:31.643 12:36:01 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:31.643 12:36:01 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:31.643 12:36:01 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:31.643 12:36:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:31.905 12:36:01 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:31.905 12:36:01 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:05:31.905 12:36:01 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:31.905 12:36:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:31.905 12:36:01 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:35.211 Waiting for block devices as requested 00:05:35.211 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:35.472 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:35.472 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:35.472 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:35.734 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:35.734 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:35.734 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:35.995 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:35.995 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:36.257 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:36.257 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:36.257 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:36.517 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:36.517 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:36.517 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:36.778 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:36.778 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:37.039 12:36:07 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:37.039 12:36:07 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:37.039 12:36:07 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:37.039 12:36:07 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:05:37.039 12:36:07 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:37.039 12:36:07 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:37.039 12:36:07 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:37.039 12:36:07 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:37.039 12:36:07 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:37.039 12:36:07 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:37.039 12:36:07 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:37.039 12:36:07 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:37.039 12:36:07 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:37.039 12:36:07 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:05:37.039 12:36:07 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:37.039 12:36:07 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:37.039 12:36:07 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:37.039 12:36:07 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:37.039 12:36:07 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:37.039 12:36:07 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:37.039 12:36:07 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:37.039 12:36:07 -- common/autotest_common.sh@1543 -- # continue 00:05:37.039 12:36:07 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:37.039 12:36:07 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:37.039 12:36:07 -- common/autotest_common.sh@10 -- # set +x 00:05:37.301 12:36:07 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:37.301 12:36:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:37.301 12:36:07 -- common/autotest_common.sh@10 -- # set +x 00:05:37.301 12:36:07 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:40.604 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:40.604 0000:80:01.7 (8086 0b00): 
ioatdma -> vfio-pci 00:05:40.864 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:40.864 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:40.864 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:40.864 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:40.864 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:40.864 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:40.864 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:40.864 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:40.864 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:40.864 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:40.864 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:40.864 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:40.864 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:40.864 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:40.864 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:41.436 12:36:11 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:41.436 12:36:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:41.436 12:36:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.436 12:36:11 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:41.436 12:36:11 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:41.436 12:36:11 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:41.436 12:36:11 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:41.436 12:36:11 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:41.436 12:36:11 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:41.436 12:36:11 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:41.436 12:36:11 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:41.436 12:36:11 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:41.436 12:36:11 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:41.436 12:36:11 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:05:41.436 12:36:11 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:41.436 12:36:11 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:41.436 12:36:11 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:41.436 12:36:11 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:41.436 12:36:11 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:41.436 12:36:11 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:41.436 12:36:11 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:05:41.436 12:36:11 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:41.436 12:36:11 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:41.436 12:36:11 -- common/autotest_common.sh@1572 -- # return 0 00:05:41.436 12:36:11 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:41.436 12:36:11 -- common/autotest_common.sh@1580 -- # return 0 00:05:41.436 12:36:11 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:41.436 12:36:11 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:41.436 12:36:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:41.436 12:36:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:41.436 12:36:11 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:41.436 12:36:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.436 12:36:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.436 12:36:11 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:41.436 12:36:11 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:41.436 12:36:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.436 12:36:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.436 12:36:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.436 ************************************ 
00:05:41.436 START TEST env 00:05:41.436 ************************************ 00:05:41.436 12:36:11 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:41.697 * Looking for test storage... 00:05:41.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:41.697 12:36:11 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.697 12:36:11 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.697 12:36:11 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.697 12:36:11 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.697 12:36:11 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.697 12:36:11 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.697 12:36:11 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.697 12:36:11 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.697 12:36:11 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.697 12:36:11 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.697 12:36:11 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.697 12:36:11 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.697 12:36:11 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.697 12:36:11 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.697 12:36:11 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.697 12:36:11 env -- scripts/common.sh@344 -- # case "$op" in 00:05:41.697 12:36:11 env -- scripts/common.sh@345 -- # : 1 00:05:41.697 12:36:11 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.697 12:36:11 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.697 12:36:11 env -- scripts/common.sh@365 -- # decimal 1 00:05:41.697 12:36:11 env -- scripts/common.sh@353 -- # local d=1 00:05:41.697 12:36:11 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.697 12:36:11 env -- scripts/common.sh@355 -- # echo 1 00:05:41.697 12:36:11 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.697 12:36:11 env -- scripts/common.sh@366 -- # decimal 2 00:05:41.697 12:36:11 env -- scripts/common.sh@353 -- # local d=2 00:05:41.697 12:36:11 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.697 12:36:11 env -- scripts/common.sh@355 -- # echo 2 00:05:41.697 12:36:11 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.697 12:36:11 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.697 12:36:11 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.697 12:36:11 env -- scripts/common.sh@368 -- # return 0 00:05:41.697 12:36:11 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.697 12:36:11 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.697 --rc genhtml_branch_coverage=1 00:05:41.697 --rc genhtml_function_coverage=1 00:05:41.697 --rc genhtml_legend=1 00:05:41.697 --rc geninfo_all_blocks=1 00:05:41.697 --rc geninfo_unexecuted_blocks=1 00:05:41.697 00:05:41.697 ' 00:05:41.697 12:36:11 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.697 --rc genhtml_branch_coverage=1 00:05:41.697 --rc genhtml_function_coverage=1 00:05:41.697 --rc genhtml_legend=1 00:05:41.697 --rc geninfo_all_blocks=1 00:05:41.697 --rc geninfo_unexecuted_blocks=1 00:05:41.697 00:05:41.697 ' 00:05:41.697 12:36:11 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:41.697 --rc genhtml_branch_coverage=1 00:05:41.697 --rc genhtml_function_coverage=1 00:05:41.697 --rc genhtml_legend=1 00:05:41.697 --rc geninfo_all_blocks=1 00:05:41.697 --rc geninfo_unexecuted_blocks=1 00:05:41.697 00:05:41.697 ' 00:05:41.697 12:36:11 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.697 --rc genhtml_branch_coverage=1 00:05:41.697 --rc genhtml_function_coverage=1 00:05:41.697 --rc genhtml_legend=1 00:05:41.697 --rc geninfo_all_blocks=1 00:05:41.697 --rc geninfo_unexecuted_blocks=1 00:05:41.697 00:05:41.697 ' 00:05:41.697 12:36:11 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:41.697 12:36:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.697 12:36:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.697 12:36:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.697 ************************************ 00:05:41.697 START TEST env_memory 00:05:41.697 ************************************ 00:05:41.697 12:36:11 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:41.697 00:05:41.697 00:05:41.697 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.697 http://cunit.sourceforge.net/ 00:05:41.697 00:05:41.697 00:05:41.697 Suite: memory 00:05:41.959 Test: alloc and free memory map ...[2024-11-28 12:36:11.828580] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:41.959 passed 00:05:41.959 Test: mem map translation ...[2024-11-28 12:36:11.854112] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:41.959 [2024-11-28 
12:36:11.854142] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:41.959 [2024-11-28 12:36:11.854195] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:41.959 [2024-11-28 12:36:11.854202] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:41.959 passed 00:05:41.959 Test: mem map registration ...[2024-11-28 12:36:11.909483] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:41.959 [2024-11-28 12:36:11.909506] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:41.959 passed 00:05:41.959 Test: mem map adjacent registrations ...passed 00:05:41.959 00:05:41.959 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.959 suites 1 1 n/a 0 0 00:05:41.959 tests 4 4 4 0 0 00:05:41.959 asserts 152 152 152 0 n/a 00:05:41.959 00:05:41.959 Elapsed time = 0.193 seconds 00:05:41.959 00:05:41.959 real 0m0.208s 00:05:41.959 user 0m0.198s 00:05:41.959 sys 0m0.009s 00:05:41.959 12:36:11 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.959 12:36:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:41.959 ************************************ 00:05:41.959 END TEST env_memory 00:05:41.959 ************************************ 00:05:41.959 12:36:12 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:41.959 12:36:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:05:41.959 12:36:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.959 12:36:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.959 ************************************ 00:05:41.959 START TEST env_vtophys 00:05:41.959 ************************************ 00:05:41.959 12:36:12 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:42.221 EAL: lib.eal log level changed from notice to debug 00:05:42.221 EAL: Detected lcore 0 as core 0 on socket 0 00:05:42.221 EAL: Detected lcore 1 as core 1 on socket 0 00:05:42.221 EAL: Detected lcore 2 as core 2 on socket 0 00:05:42.221 EAL: Detected lcore 3 as core 3 on socket 0 00:05:42.221 EAL: Detected lcore 4 as core 4 on socket 0 00:05:42.221 EAL: Detected lcore 5 as core 5 on socket 0 00:05:42.221 EAL: Detected lcore 6 as core 6 on socket 0 00:05:42.221 EAL: Detected lcore 7 as core 7 on socket 0 00:05:42.221 EAL: Detected lcore 8 as core 8 on socket 0 00:05:42.221 EAL: Detected lcore 9 as core 9 on socket 0 00:05:42.221 EAL: Detected lcore 10 as core 10 on socket 0 00:05:42.222 EAL: Detected lcore 11 as core 11 on socket 0 00:05:42.222 EAL: Detected lcore 12 as core 12 on socket 0 00:05:42.222 EAL: Detected lcore 13 as core 13 on socket 0 00:05:42.222 EAL: Detected lcore 14 as core 14 on socket 0 00:05:42.222 EAL: Detected lcore 15 as core 15 on socket 0 00:05:42.222 EAL: Detected lcore 16 as core 16 on socket 0 00:05:42.222 EAL: Detected lcore 17 as core 17 on socket 0 00:05:42.222 EAL: Detected lcore 18 as core 18 on socket 0 00:05:42.222 EAL: Detected lcore 19 as core 19 on socket 0 00:05:42.222 EAL: Detected lcore 20 as core 20 on socket 0 00:05:42.222 EAL: Detected lcore 21 as core 21 on socket 0 00:05:42.222 EAL: Detected lcore 22 as core 22 on socket 0 00:05:42.222 EAL: Detected lcore 23 as core 23 on socket 0 00:05:42.222 EAL: Detected lcore 24 as core 24 on socket 0 00:05:42.222 EAL: Detected lcore 25 
as core 25 on socket 0
00:05:42.222 EAL: Detected lcore 26 as core 26 on socket 0
00:05:42.222 EAL: Detected lcore 27 as core 27 on socket 0
00:05:42.222 EAL: Detected lcore 28 as core 28 on socket 0
00:05:42.222 EAL: Detected lcore 29 as core 29 on socket 0
00:05:42.222 EAL: Detected lcore 30 as core 30 on socket 0
00:05:42.222 EAL: Detected lcore 31 as core 31 on socket 0
00:05:42.222 EAL: Detected lcore 32 as core 32 on socket 0
00:05:42.222 EAL: Detected lcore 33 as core 33 on socket 0
00:05:42.222 EAL: Detected lcore 34 as core 34 on socket 0
00:05:42.222 EAL: Detected lcore 35 as core 35 on socket 0
00:05:42.222 EAL: Detected lcore 36 as core 0 on socket 1
00:05:42.222 EAL: Detected lcore 37 as core 1 on socket 1
00:05:42.222 EAL: Detected lcore 38 as core 2 on socket 1
00:05:42.222 EAL: Detected lcore 39 as core 3 on socket 1
00:05:42.222 EAL: Detected lcore 40 as core 4 on socket 1
00:05:42.222 EAL: Detected lcore 41 as core 5 on socket 1
00:05:42.222 EAL: Detected lcore 42 as core 6 on socket 1
00:05:42.222 EAL: Detected lcore 43 as core 7 on socket 1
00:05:42.222 EAL: Detected lcore 44 as core 8 on socket 1
00:05:42.222 EAL: Detected lcore 45 as core 9 on socket 1
00:05:42.222 EAL: Detected lcore 46 as core 10 on socket 1
00:05:42.222 EAL: Detected lcore 47 as core 11 on socket 1
00:05:42.222 EAL: Detected lcore 48 as core 12 on socket 1
00:05:42.222 EAL: Detected lcore 49 as core 13 on socket 1
00:05:42.222 EAL: Detected lcore 50 as core 14 on socket 1
00:05:42.222 EAL: Detected lcore 51 as core 15 on socket 1
00:05:42.222 EAL: Detected lcore 52 as core 16 on socket 1
00:05:42.222 EAL: Detected lcore 53 as core 17 on socket 1
00:05:42.222 EAL: Detected lcore 54 as core 18 on socket 1
00:05:42.222 EAL: Detected lcore 55 as core 19 on socket 1
00:05:42.222 EAL: Detected lcore 56 as core 20 on socket 1
00:05:42.222 EAL: Detected lcore 57 as core 21 on socket 1
00:05:42.222 EAL: Detected lcore 58 as core 22 on socket 1
00:05:42.222 EAL: Detected lcore 59 as core 23 on socket 1
00:05:42.222 EAL: Detected lcore 60 as core 24 on socket 1
00:05:42.222 EAL: Detected lcore 61 as core 25 on socket 1
00:05:42.222 EAL: Detected lcore 62 as core 26 on socket 1
00:05:42.222 EAL: Detected lcore 63 as core 27 on socket 1
00:05:42.222 EAL: Detected lcore 64 as core 28 on socket 1
00:05:42.222 EAL: Detected lcore 65 as core 29 on socket 1
00:05:42.222 EAL: Detected lcore 66 as core 30 on socket 1
00:05:42.222 EAL: Detected lcore 67 as core 31 on socket 1
00:05:42.222 EAL: Detected lcore 68 as core 32 on socket 1
00:05:42.222 EAL: Detected lcore 69 as core 33 on socket 1
00:05:42.222 EAL: Detected lcore 70 as core 34 on socket 1
00:05:42.222 EAL: Detected lcore 71 as core 35 on socket 1
00:05:42.222 EAL: Detected lcore 72 as core 0 on socket 0
00:05:42.222 EAL: Detected lcore 73 as core 1 on socket 0
00:05:42.222 EAL: Detected lcore 74 as core 2 on socket 0
00:05:42.222 EAL: Detected lcore 75 as core 3 on socket 0
00:05:42.222 EAL: Detected lcore 76 as core 4 on socket 0
00:05:42.222 EAL: Detected lcore 77 as core 5 on socket 0
00:05:42.222 EAL: Detected lcore 78 as core 6 on socket 0
00:05:42.222 EAL: Detected lcore 79 as core 7 on socket 0
00:05:42.222 EAL: Detected lcore 80 as core 8 on socket 0
00:05:42.222 EAL: Detected lcore 81 as core 9 on socket 0
00:05:42.222 EAL: Detected lcore 82 as core 10 on socket 0
00:05:42.222 EAL: Detected lcore 83 as core 11 on socket 0
00:05:42.222 EAL: Detected lcore 84 as core 12 on socket 0
00:05:42.222 EAL: Detected lcore 85 as core 13 on socket 0
00:05:42.222 EAL: Detected lcore 86 as core 14 on socket 0
00:05:42.222 EAL: Detected lcore 87 as core 15 on socket 0
00:05:42.222 EAL: Detected lcore 88 as core 16 on socket 0
00:05:42.222 EAL: Detected lcore 89 as core 17 on socket 0
00:05:42.222 EAL: Detected lcore 90 as core 18 on socket 0
00:05:42.222 EAL: Detected lcore 91 as core 19 on socket 0
00:05:42.222 EAL: Detected lcore 92 as core 20 on socket 0
00:05:42.222 EAL: Detected lcore 93 as core 21 on socket 0
00:05:42.222 EAL: Detected lcore 94 as core 22 on socket 0
00:05:42.222 EAL: Detected lcore 95 as core 23 on socket 0
00:05:42.222 EAL: Detected lcore 96 as core 24 on socket 0
00:05:42.222 EAL: Detected lcore 97 as core 25 on socket 0
00:05:42.222 EAL: Detected lcore 98 as core 26 on socket 0
00:05:42.222 EAL: Detected lcore 99 as core 27 on socket 0
00:05:42.222 EAL: Detected lcore 100 as core 28 on socket 0
00:05:42.222 EAL: Detected lcore 101 as core 29 on socket 0
00:05:42.222 EAL: Detected lcore 102 as core 30 on socket 0
00:05:42.222 EAL: Detected lcore 103 as core 31 on socket 0
00:05:42.222 EAL: Detected lcore 104 as core 32 on socket 0
00:05:42.222 EAL: Detected lcore 105 as core 33 on socket 0
00:05:42.222 EAL: Detected lcore 106 as core 34 on socket 0
00:05:42.222 EAL: Detected lcore 107 as core 35 on socket 0
00:05:42.222 EAL: Detected lcore 108 as core 0 on socket 1
00:05:42.222 EAL: Detected lcore 109 as core 1 on socket 1
00:05:42.222 EAL: Detected lcore 110 as core 2 on socket 1
00:05:42.222 EAL: Detected lcore 111 as core 3 on socket 1
00:05:42.222 EAL: Detected lcore 112 as core 4 on socket 1
00:05:42.222 EAL: Detected lcore 113 as core 5 on socket 1
00:05:42.222 EAL: Detected lcore 114 as core 6 on socket 1
00:05:42.222 EAL: Detected lcore 115 as core 7 on socket 1
00:05:42.222 EAL: Detected lcore 116 as core 8 on socket 1
00:05:42.222 EAL: Detected lcore 117 as core 9 on socket 1
00:05:42.223 EAL: Detected lcore 118 as core 10 on socket 1
00:05:42.223 EAL: Detected lcore 119 as core 11 on socket 1
00:05:42.223 EAL: Detected lcore 120 as core 12 on socket 1
00:05:42.223 EAL: Detected lcore 121 as core 13 on socket 1
00:05:42.223 EAL: Detected lcore 122 as core 14 on socket 1
00:05:42.223 EAL: Detected lcore 123 as core 15 on socket 1
00:05:42.223 EAL: Detected lcore 124 as core 16 on socket 1
00:05:42.223 EAL: Detected lcore 125 as core 17 on socket 1
00:05:42.223 EAL: Detected lcore 126 as core 18 on socket 1
00:05:42.223 EAL: Detected lcore 127 as core 19 on socket 1
00:05:42.223 EAL: Skipped lcore 128 as core 20 on socket 1
00:05:42.223 EAL: Skipped lcore 129 as core 21 on socket 1
00:05:42.223 EAL: Skipped lcore 130 as core 22 on socket 1
00:05:42.223 EAL: Skipped lcore 131 as core 23 on socket 1
00:05:42.223 EAL: Skipped lcore 132 as core 24 on socket 1
00:05:42.223 EAL: Skipped lcore 133 as core 25 on socket 1
00:05:42.223 EAL: Skipped lcore 134 as core 26 on socket 1
00:05:42.223 EAL: Skipped lcore 135 as core 27 on socket 1
00:05:42.223 EAL: Skipped lcore 136 as core 28 on socket 1
00:05:42.223 EAL: Skipped lcore 137 as core 29 on socket 1
00:05:42.223 EAL: Skipped lcore 138 as core 30 on socket 1
00:05:42.223 EAL: Skipped lcore 139 as core 31 on socket 1
00:05:42.223 EAL: Skipped lcore 140 as core 32 on socket 1
00:05:42.223 EAL: Skipped lcore 141 as core 33 on socket 1
00:05:42.223 EAL: Skipped lcore 142 as core 34 on socket 1
00:05:42.223 EAL: Skipped lcore 143 as core 35 on socket 1
00:05:42.223 EAL: Maximum logical cores by configuration: 128
00:05:42.223 EAL: Detected CPU lcores: 128
00:05:42.223 EAL: Detected NUMA nodes: 2
00:05:42.223 EAL: Checking presence of .so 'librte_eal.so.25.0'
00:05:42.223 EAL: Detected shared linkage of DPDK
00:05:42.223 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0
00:05:42.223 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0
00:05:42.223 EAL: Registered [vdev] bus.
00:05:42.223 EAL: bus.vdev log level changed from disabled to notice
00:05:42.223 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0
00:05:42.223 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0
00:05:42.223 EAL: pmd.net.i40e.init log level changed from disabled to notice
00:05:42.223 EAL: pmd.net.i40e.driver log level changed from disabled to notice
00:05:42.223 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25.0
00:05:42.223 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0
00:05:42.223 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25.0
00:05:42.223 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0
00:05:42.223 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0
00:05:42.223 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0
00:05:42.223 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so
00:05:42.223 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so
00:05:42.223 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so
00:05:42.223 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so
00:05:42.223 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so
00:05:42.223 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so
00:05:42.223 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so
00:05:42.223 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so
00:05:42.223 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so
00:05:42.223 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so
00:05:42.223 EAL: No shared files mode enabled, IPC will be disabled
00:05:42.223 EAL: No shared files mode enabled, IPC is disabled
00:05:42.223 EAL: Bus pci wants IOVA as 'DC'
00:05:42.223 EAL: Bus vdev wants IOVA as 'DC'
00:05:42.223 EAL: Buses did not request a specific IOVA mode.
00:05:42.223 EAL: IOMMU is available, selecting IOVA as VA mode.
00:05:42.223 EAL: Selected IOVA mode 'VA'
00:05:42.223 EAL: Probing VFIO support...
00:05:42.223 EAL: No shared files mode enabled, IPC is disabled
00:05:42.223 EAL: IOMMU type 1 (Type 1) is supported
00:05:42.223 EAL: IOMMU type 7 (sPAPR) is not supported
00:05:42.223 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:05:42.223 EAL: VFIO support initialized
00:05:42.223 EAL: Ask a virtual area of 0x2e000 bytes
00:05:42.223 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:05:42.223 EAL: Setting up physically contiguous memory...
00:05:42.223 EAL: Setting maximum number of open files to 524288
00:05:42.223 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:05:42.223 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:05:42.223 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:05:42.223 EAL: Ask a virtual area of 0x61000 bytes
00:05:42.223 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:05:42.223 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:42.223 EAL: Ask a virtual area of 0x400000000 bytes
00:05:42.223 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:05:42.223 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:05:42.223 EAL: Ask a virtual area of 0x61000 bytes
00:05:42.223 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:05:42.223 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:42.223 EAL: Ask a virtual area of 0x400000000 bytes
00:05:42.223 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:05:42.223 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:05:42.223 EAL: Ask a virtual area of 0x61000 bytes
00:05:42.223 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:05:42.223 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:42.223 EAL: Ask a virtual area of 0x400000000 bytes
00:05:42.223 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:05:42.223 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:05:42.223 EAL: Ask a virtual area of 0x61000 bytes
00:05:42.223 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:05:42.223 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:42.223 EAL: Ask a virtual area of 0x400000000 bytes
00:05:42.223 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:05:42.223 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:05:42.223 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:05:42.223 EAL: Ask a virtual area of 0x61000 bytes
00:05:42.223 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:05:42.223 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:42.223 EAL: Ask a virtual area of 0x400000000 bytes
00:05:42.223 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:05:42.224 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:05:42.224 EAL: Ask a virtual area of 0x61000 bytes
00:05:42.224 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:05:42.224 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:42.224 EAL: Ask a virtual area of 0x400000000 bytes
00:05:42.224 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:05:42.224 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:05:42.224 EAL: Ask a virtual area of 0x61000 bytes
00:05:42.224 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:05:42.224 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:42.224 EAL: Ask a virtual area of 0x400000000 bytes
00:05:42.224 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:05:42.224 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:05:42.224 EAL: Ask a virtual area of 0x61000 bytes
00:05:42.224 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:05:42.224 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:42.224 EAL: Ask a virtual area of 0x400000000 bytes
00:05:42.224 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:05:42.224 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:05:42.224 EAL: Hugepages will be freed exactly as allocated.
00:05:42.224 EAL: No shared files mode enabled, IPC is disabled
00:05:42.224 EAL: No shared files mode enabled, IPC is disabled
00:05:42.224 EAL: Refined arch frequency 2400000000 to measured frequency 2394381667
00:05:42.224 EAL: TSC frequency is ~2394400 KHz
00:05:42.224 EAL: Main lcore 0 is ready (tid=7fb62bc4ca00;cpuset=[0])
00:05:42.224 EAL: Trying to obtain current memory policy.
00:05:42.224 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:42.224 EAL: Restoring previous memory policy: 0
00:05:42.224 EAL: request: mp_malloc_sync
00:05:42.224 EAL: No shared files mode enabled, IPC is disabled
00:05:42.224 EAL: Heap on socket 0 was expanded by 2MB
00:05:42.224 EAL: Allocated 2112 bytes of per-lcore data with a 64-byte alignment
00:05:42.224 EAL: No shared files mode enabled, IPC is disabled
00:05:42.224 EAL: Mem event callback 'spdk:(nil)' registered
00:05:42.224
00:05:42.224
00:05:42.224 CUnit - A unit testing framework for C - Version 2.1-3
00:05:42.224 http://cunit.sourceforge.net/
00:05:42.224
00:05:42.224
00:05:42.224 Suite: components_suite
00:05:42.224 Test: vtophys_malloc_test ...passed
00:05:42.224 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:05:42.224 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:42.224 EAL: Restoring previous memory policy: 4
00:05:42.224 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.224 EAL: request: mp_malloc_sync
00:05:42.224 EAL: No shared files mode enabled, IPC is disabled
00:05:42.224 EAL: Heap on socket 0 was expanded by 4MB
00:05:42.224 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.224 EAL: request: mp_malloc_sync
00:05:42.224 EAL: No shared files mode enabled, IPC is disabled
00:05:42.224 EAL: Heap on socket 0 was shrunk by 4MB
00:05:42.224 EAL: Trying to obtain current memory policy.
00:05:42.224 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:42.224 EAL: Restoring previous memory policy: 4
00:05:42.224 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.224 EAL: request: mp_malloc_sync
00:05:42.224 EAL: No shared files mode enabled, IPC is disabled
00:05:42.224 EAL: Heap on socket 0 was expanded by 6MB
00:05:42.224 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.224 EAL: request: mp_malloc_sync
00:05:42.224 EAL: No shared files mode enabled, IPC is disabled
00:05:42.224 EAL: Heap on socket 0 was shrunk by 6MB
00:05:42.224 EAL: Trying to obtain current memory policy.
00:05:42.224 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:42.224 EAL: Restoring previous memory policy: 4
00:05:42.224 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.224 EAL: request: mp_malloc_sync
00:05:42.224 EAL: No shared files mode enabled, IPC is disabled
00:05:42.224 EAL: Heap on socket 0 was expanded by 10MB
00:05:42.224 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.224 EAL: request: mp_malloc_sync
00:05:42.224 EAL: No shared files mode enabled, IPC is disabled
00:05:42.224 EAL: Heap on socket 0 was shrunk by 10MB
00:05:42.224 EAL: Trying to obtain current memory policy.
00:05:42.224 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:42.224 EAL: Restoring previous memory policy: 4
00:05:42.224 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.224 EAL: request: mp_malloc_sync
00:05:42.224 EAL: No shared files mode enabled, IPC is disabled
00:05:42.224 EAL: Heap on socket 0 was expanded by 18MB
00:05:42.224 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.224 EAL: request: mp_malloc_sync
00:05:42.224 EAL: No shared files mode enabled, IPC is disabled
00:05:42.224 EAL: Heap on socket 0 was shrunk by 18MB
00:05:42.224 EAL: Trying to obtain current memory policy.
00:05:42.224 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:42.224 EAL: Restoring previous memory policy: 4
00:05:42.224 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.224 EAL: request: mp_malloc_sync
00:05:42.224 EAL: No shared files mode enabled, IPC is disabled
00:05:42.224 EAL: Heap on socket 0 was expanded by 34MB
00:05:42.224 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.224 EAL: request: mp_malloc_sync
00:05:42.224 EAL: No shared files mode enabled, IPC is disabled
00:05:42.224 EAL: Heap on socket 0 was shrunk by 34MB
00:05:42.224 EAL: Trying to obtain current memory policy.
00:05:42.224 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:42.224 EAL: Restoring previous memory policy: 4
00:05:42.224 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.224 EAL: request: mp_malloc_sync
00:05:42.224 EAL: No shared files mode enabled, IPC is disabled
00:05:42.224 EAL: Heap on socket 0 was expanded by 66MB
00:05:42.224 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.224 EAL: request: mp_malloc_sync
00:05:42.224 EAL: No shared files mode enabled, IPC is disabled
00:05:42.224 EAL: Heap on socket 0 was shrunk by 66MB
00:05:42.224 EAL: Trying to obtain current memory policy.
00:05:42.224 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:42.485 EAL: Restoring previous memory policy: 4
00:05:42.485 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.485 EAL: request: mp_malloc_sync
00:05:42.485 EAL: No shared files mode enabled, IPC is disabled
00:05:42.485 EAL: Heap on socket 0 was expanded by 130MB
00:05:42.485 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.485 EAL: request: mp_malloc_sync
00:05:42.485 EAL: No shared files mode enabled, IPC is disabled
00:05:42.485 EAL: Heap on socket 0 was shrunk by 130MB
00:05:42.485 EAL: Trying to obtain current memory policy.
00:05:42.485 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:42.485 EAL: Restoring previous memory policy: 4
00:05:42.485 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.485 EAL: request: mp_malloc_sync
00:05:42.485 EAL: No shared files mode enabled, IPC is disabled
00:05:42.485 EAL: Heap on socket 0 was expanded by 258MB
00:05:42.485 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.485 EAL: request: mp_malloc_sync
00:05:42.485 EAL: No shared files mode enabled, IPC is disabled
00:05:42.485 EAL: Heap on socket 0 was shrunk by 258MB
00:05:42.485 EAL: Trying to obtain current memory policy.
00:05:42.485 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:42.485 EAL: Restoring previous memory policy: 4
00:05:42.485 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.485 EAL: request: mp_malloc_sync
00:05:42.485 EAL: No shared files mode enabled, IPC is disabled
00:05:42.485 EAL: Heap on socket 0 was expanded by 514MB
00:05:42.485 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.746 EAL: request: mp_malloc_sync
00:05:42.746 EAL: No shared files mode enabled, IPC is disabled
00:05:42.746 EAL: Heap on socket 0 was shrunk by 514MB
00:05:42.746 EAL: Trying to obtain current memory policy.
00:05:42.746 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:42.746 EAL: Restoring previous memory policy: 4
00:05:42.747 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.747 EAL: request: mp_malloc_sync
00:05:42.747 EAL: No shared files mode enabled, IPC is disabled
00:05:42.747 EAL: Heap on socket 0 was expanded by 1026MB
00:05:43.008 EAL: Calling mem event callback 'spdk:(nil)'
00:05:43.008 EAL: request: mp_malloc_sync
00:05:43.008 EAL: No shared files mode enabled, IPC is disabled
00:05:43.008 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:43.008 passed
00:05:43.008
00:05:43.008 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:43.008               suites      1      1    n/a      0        0
00:05:43.008                tests      2      2      2      0        0
00:05:43.008              asserts    497    497    497      0      n/a
00:05:43.008
00:05:43.008 Elapsed time =    0.688 seconds
00:05:43.008 EAL: Calling mem event callback 'spdk:(nil)'
00:05:43.008 EAL: request: mp_malloc_sync
00:05:43.008 EAL: No shared files mode enabled, IPC is disabled
00:05:43.008 EAL: Heap on socket 0 was shrunk by 2MB
00:05:43.008 EAL: No shared files mode enabled, IPC is disabled
00:05:43.008 EAL: No shared files mode enabled, IPC is disabled
00:05:43.008 EAL: No shared files mode enabled, IPC is disabled
00:05:43.008
00:05:43.008 real	0m0.943s
00:05:43.008 user	0m0.450s
00:05:43.008 sys	0m0.361s
00:05:43.008 12:36:13 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:43.008 12:36:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:43.008 ************************************
00:05:43.008 END TEST env_vtophys
00:05:43.008 ************************************
00:05:43.008 12:36:13 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:43.008 12:36:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:43.008 12:36:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:43.008 12:36:13 env -- common/autotest_common.sh@10 -- # set +x
00:05:43.008 ************************************
00:05:43.008 START TEST env_pci
00:05:43.008 ************************************
00:05:43.008 12:36:13 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:43.008
00:05:43.008
00:05:43.008 CUnit - A unit testing framework for C - Version 2.1-3
00:05:43.008 http://cunit.sourceforge.net/
00:05:43.008
00:05:43.008
00:05:43.008 Suite: pci
00:05:43.008 Test: pci_hook ...[2024-11-28 12:36:13.104760] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3143779 has claimed it
00:05:43.269 EAL: Cannot find device (10000:00:01.0)
00:05:43.269 EAL: Failed to attach device on primary process
00:05:43.269 passed
00:05:43.269
00:05:43.269 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:43.269               suites      1      1    n/a      0        0
00:05:43.269                tests      1      1      1      0        0
00:05:43.269              asserts     25     25     25      0      n/a
00:05:43.269
00:05:43.269 Elapsed time =    0.029 seconds
00:05:43.269
00:05:43.269 real	0m0.051s
00:05:43.269 user	0m0.015s
00:05:43.269 sys	0m0.036s
00:05:43.269 12:36:13 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:43.269 12:36:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:43.269 ************************************
00:05:43.269 END TEST env_pci
00:05:43.269 ************************************
00:05:43.269 12:36:13 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:43.269 12:36:13 env -- env/env.sh@15 -- # uname
00:05:43.269 12:36:13 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:43.269 12:36:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:43.269 12:36:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:43.269 12:36:13 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:05:43.269 12:36:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:43.269 12:36:13 env -- common/autotest_common.sh@10 -- # set +x
00:05:43.269 ************************************
00:05:43.269 START TEST env_dpdk_post_init
00:05:43.269 ************************************
00:05:43.269 12:36:13 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:43.269 EAL: Detected CPU lcores: 128
00:05:43.269 EAL: Detected NUMA nodes: 2
00:05:43.269 EAL: Detected shared linkage of DPDK
00:05:43.269 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:43.269 EAL: Selected IOVA mode 'VA'
00:05:43.269 EAL: VFIO support initialized
00:05:43.530 EAL: Using IOMMU type 1 (Type 1)
00:05:47.732 Starting DPDK initialization...
00:05:47.732 Starting SPDK post initialization...
00:05:47.732 SPDK NVMe probe
00:05:47.732 Attaching to 0000:65:00.0
00:05:47.732 Attached to 0000:65:00.0
00:05:47.732 Cleaning up...
00:05:49.119
00:05:49.119 real	0m5.845s
00:05:49.119 user	0m0.104s
00:05:49.119 sys	0m0.195s
00:05:49.119 12:36:19 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:49.119 12:36:19 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:49.119 ************************************
00:05:49.119 END TEST env_dpdk_post_init
00:05:49.119 ************************************
00:05:49.119 12:36:19 env -- env/env.sh@26 -- # uname
00:05:49.119 12:36:19 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:49.119 12:36:19 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:49.119 12:36:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:49.119 12:36:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:49.119 12:36:19 env -- common/autotest_common.sh@10 -- # set +x
00:05:49.119 ************************************
00:05:49.119 START TEST env_mem_callbacks
00:05:49.119 ************************************
00:05:49.119 12:36:19 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:49.119 EAL: Detected CPU lcores: 128
00:05:49.119 EAL: Detected NUMA nodes: 2
00:05:49.119 EAL: Detected shared linkage of DPDK
00:05:49.119 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:49.119 EAL: Selected IOVA mode 'VA'
00:05:49.119 EAL: VFIO support initialized
00:05:49.380
00:05:49.380
00:05:49.380 CUnit - A unit testing framework for C - Version 2.1-3
00:05:49.380 http://cunit.sourceforge.net/
00:05:49.380
00:05:49.380
00:05:49.380 Suite: memory
00:05:49.380 Test: test ...
00:05:49.380 register 0x200000200000 2097152
00:05:49.380 malloc 3145728
00:05:49.380 register 0x200000400000 4194304
00:05:49.380 buf 0x200000500000 len 3145728 PASSED
00:05:49.380 malloc 64
00:05:49.380 buf 0x2000004fff40 len 64 PASSED
00:05:49.380 malloc 4194304
00:05:49.380 register 0x200000800000 6291456
00:05:49.380 buf 0x200000a00000 len 4194304 PASSED
00:05:49.380 free 0x200000500000 3145728
00:05:49.380 free 0x2000004fff40 64
00:05:49.380 unregister 0x200000400000 4194304 PASSED
00:05:49.380 free 0x200000a00000 4194304
00:05:49.380 unregister 0x200000800000 6291456 PASSED
00:05:49.380 malloc 8388608
00:05:49.380 register 0x200000400000 10485760
00:05:49.380 buf 0x200000600000 len 8388608 PASSED
00:05:49.380 free 0x200000600000 8388608
00:05:49.380 unregister 0x200000400000 10485760 PASSED
00:05:49.380 passed
00:05:49.380
00:05:49.380 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:49.380               suites      1      1    n/a      0        0
00:05:49.380                tests      1      1      1      0        0
00:05:49.380              asserts     15     15     15      0      n/a
00:05:49.380
00:05:49.380 Elapsed time =    0.010 seconds
00:05:49.380
00:05:49.380 real	0m0.171s
00:05:49.380 user	0m0.026s
00:05:49.380 sys	0m0.045s
00:05:49.380 12:36:19 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:49.380 12:36:19 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:49.380 ************************************
00:05:49.380 END TEST env_mem_callbacks
00:05:49.380 ************************************
00:05:49.380
00:05:49.380 real	0m7.841s
00:05:49.380 user	0m1.073s
00:05:49.380 sys	0m1.024s
00:05:49.380 12:36:19 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:49.380 12:36:19 env -- common/autotest_common.sh@10 -- # set +x
00:05:49.380 ************************************
00:05:49.380 END TEST env
00:05:49.380 ************************************
00:05:49.380 12:36:19 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:49.380 12:36:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:49.380 12:36:19 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:49.380 12:36:19 -- common/autotest_common.sh@10 -- # set +x
00:05:49.380 ************************************
00:05:49.380 START TEST rpc
00:05:49.380 ************************************
00:05:49.380 12:36:19 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:49.641 * Looking for test storage...
00:05:49.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:49.641 12:36:19 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:49.641 12:36:19 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:05:49.641 12:36:19 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:49.641 12:36:19 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:49.641 12:36:19 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:49.641 12:36:19 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:49.641 12:36:19 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:49.641 12:36:19 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:49.641 12:36:19 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:49.641 12:36:19 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:49.641 12:36:19 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:49.641 12:36:19 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:49.641 12:36:19 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:49.641 12:36:19 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:49.641 12:36:19 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:49.641 12:36:19 rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:49.641 12:36:19 rpc -- scripts/common.sh@345 -- # : 1
00:05:49.641 12:36:19 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:49.641 12:36:19 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:49.641 12:36:19 rpc -- scripts/common.sh@365 -- # decimal 1
00:05:49.641 12:36:19 rpc -- scripts/common.sh@353 -- # local d=1
00:05:49.641 12:36:19 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:49.641 12:36:19 rpc -- scripts/common.sh@355 -- # echo 1
00:05:49.641 12:36:19 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:49.641 12:36:19 rpc -- scripts/common.sh@366 -- # decimal 2
00:05:49.641 12:36:19 rpc -- scripts/common.sh@353 -- # local d=2
00:05:49.641 12:36:19 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:49.641 12:36:19 rpc -- scripts/common.sh@355 -- # echo 2
00:05:49.641 12:36:19 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:49.641 12:36:19 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:49.641 12:36:19 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:49.641 12:36:19 rpc -- scripts/common.sh@368 -- # return 0
00:05:49.641 12:36:19 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:49.641 12:36:19 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:49.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:49.641 --rc genhtml_branch_coverage=1
00:05:49.641 --rc genhtml_function_coverage=1
00:05:49.641 --rc genhtml_legend=1
00:05:49.641 --rc geninfo_all_blocks=1
00:05:49.641 --rc geninfo_unexecuted_blocks=1
00:05:49.641
00:05:49.641 '
00:05:49.641 12:36:19 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:49.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:49.641 --rc genhtml_branch_coverage=1
00:05:49.641 --rc genhtml_function_coverage=1
00:05:49.641 --rc genhtml_legend=1
00:05:49.641 --rc geninfo_all_blocks=1
00:05:49.642 --rc geninfo_unexecuted_blocks=1
00:05:49.642
00:05:49.642 '
00:05:49.642 12:36:19 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:49.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:49.642 --rc genhtml_branch_coverage=1 00:05:49.642 --rc genhtml_function_coverage=1 00:05:49.642 --rc genhtml_legend=1 00:05:49.642 --rc geninfo_all_blocks=1 00:05:49.642 --rc geninfo_unexecuted_blocks=1 00:05:49.642 00:05:49.642 ' 00:05:49.642 12:36:19 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.642 --rc genhtml_branch_coverage=1 00:05:49.642 --rc genhtml_function_coverage=1 00:05:49.642 --rc genhtml_legend=1 00:05:49.642 --rc geninfo_all_blocks=1 00:05:49.642 --rc geninfo_unexecuted_blocks=1 00:05:49.642 00:05:49.642 ' 00:05:49.642 12:36:19 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3145093 00:05:49.642 12:36:19 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.642 12:36:19 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3145093 00:05:49.642 12:36:19 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:49.642 12:36:19 rpc -- common/autotest_common.sh@835 -- # '[' -z 3145093 ']' 00:05:49.642 12:36:19 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.642 12:36:19 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.642 12:36:19 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.642 12:36:19 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.642 12:36:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.642 [2024-11-28 12:36:19.730769] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:05:49.642 [2024-11-28 12:36:19.730839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3145093 ] 00:05:49.904 [2024-11-28 12:36:19.868019] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:49.904 [2024-11-28 12:36:19.929420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.904 [2024-11-28 12:36:19.957078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:49.904 [2024-11-28 12:36:19.957131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3145093' to capture a snapshot of events at runtime. 00:05:49.904 [2024-11-28 12:36:19.957139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:49.904 [2024-11-28 12:36:19.957146] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:49.904 [2024-11-28 12:36:19.957157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3145093 for offline analysis/debug. 
00:05:49.904 [2024-11-28 12:36:19.957927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.475 12:36:20 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.475 12:36:20 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:50.475 12:36:20 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:50.475 12:36:20 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:50.475 12:36:20 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:50.475 12:36:20 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:50.475 12:36:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.475 12:36:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.475 12:36:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.475 ************************************ 00:05:50.475 START TEST rpc_integrity 00:05:50.475 ************************************ 00:05:50.475 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:50.475 12:36:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:50.475 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.475 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.475 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.475 12:36:20 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:50.737 12:36:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:50.737 12:36:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:50.737 12:36:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:50.737 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.737 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.737 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.737 12:36:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:50.737 12:36:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:50.737 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.737 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.737 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.737 12:36:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:50.737 { 00:05:50.737 "name": "Malloc0", 00:05:50.737 "aliases": [ 00:05:50.737 "05212ce8-5be1-4bfd-97cf-3e9665c69374" 00:05:50.737 ], 00:05:50.737 "product_name": "Malloc disk", 00:05:50.737 "block_size": 512, 00:05:50.737 "num_blocks": 16384, 00:05:50.737 "uuid": "05212ce8-5be1-4bfd-97cf-3e9665c69374", 00:05:50.737 "assigned_rate_limits": { 00:05:50.737 "rw_ios_per_sec": 0, 00:05:50.737 "rw_mbytes_per_sec": 0, 00:05:50.737 "r_mbytes_per_sec": 0, 00:05:50.737 "w_mbytes_per_sec": 0 00:05:50.737 }, 00:05:50.737 "claimed": false, 00:05:50.737 "zoned": false, 00:05:50.737 "supported_io_types": { 00:05:50.737 "read": true, 00:05:50.737 "write": true, 00:05:50.737 "unmap": true, 00:05:50.737 "flush": true, 00:05:50.737 "reset": true, 00:05:50.737 "nvme_admin": false, 00:05:50.737 "nvme_io": false, 00:05:50.737 "nvme_io_md": false, 00:05:50.737 "write_zeroes": true, 00:05:50.737 "zcopy": true, 00:05:50.737 "get_zone_info": false, 00:05:50.737 
"zone_management": false, 00:05:50.737 "zone_append": false, 00:05:50.737 "compare": false, 00:05:50.737 "compare_and_write": false, 00:05:50.737 "abort": true, 00:05:50.737 "seek_hole": false, 00:05:50.737 "seek_data": false, 00:05:50.737 "copy": true, 00:05:50.737 "nvme_iov_md": false 00:05:50.737 }, 00:05:50.737 "memory_domains": [ 00:05:50.737 { 00:05:50.737 "dma_device_id": "system", 00:05:50.737 "dma_device_type": 1 00:05:50.737 }, 00:05:50.737 { 00:05:50.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.737 "dma_device_type": 2 00:05:50.737 } 00:05:50.737 ], 00:05:50.737 "driver_specific": {} 00:05:50.737 } 00:05:50.737 ]' 00:05:50.737 12:36:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:50.737 12:36:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:50.737 12:36:20 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:50.737 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.737 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.737 [2024-11-28 12:36:20.738336] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:50.737 [2024-11-28 12:36:20.738388] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:50.737 [2024-11-28 12:36:20.738409] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1142c20 00:05:50.737 [2024-11-28 12:36:20.738417] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:50.737 [2024-11-28 12:36:20.739961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:50.737 [2024-11-28 12:36:20.739997] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:50.737 Passthru0 00:05:50.737 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.737 12:36:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:50.737 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.737 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.737 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.737 12:36:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:50.737 { 00:05:50.737 "name": "Malloc0", 00:05:50.737 "aliases": [ 00:05:50.737 "05212ce8-5be1-4bfd-97cf-3e9665c69374" 00:05:50.737 ], 00:05:50.737 "product_name": "Malloc disk", 00:05:50.737 "block_size": 512, 00:05:50.737 "num_blocks": 16384, 00:05:50.737 "uuid": "05212ce8-5be1-4bfd-97cf-3e9665c69374", 00:05:50.737 "assigned_rate_limits": { 00:05:50.737 "rw_ios_per_sec": 0, 00:05:50.737 "rw_mbytes_per_sec": 0, 00:05:50.737 "r_mbytes_per_sec": 0, 00:05:50.737 "w_mbytes_per_sec": 0 00:05:50.737 }, 00:05:50.737 "claimed": true, 00:05:50.737 "claim_type": "exclusive_write", 00:05:50.737 "zoned": false, 00:05:50.737 "supported_io_types": { 00:05:50.737 "read": true, 00:05:50.737 "write": true, 00:05:50.737 "unmap": true, 00:05:50.737 "flush": true, 00:05:50.737 "reset": true, 00:05:50.737 "nvme_admin": false, 00:05:50.737 "nvme_io": false, 00:05:50.737 "nvme_io_md": false, 00:05:50.737 "write_zeroes": true, 00:05:50.737 "zcopy": true, 00:05:50.737 "get_zone_info": false, 00:05:50.737 "zone_management": false, 00:05:50.737 "zone_append": false, 00:05:50.737 "compare": false, 00:05:50.737 "compare_and_write": false, 00:05:50.737 "abort": true, 00:05:50.737 "seek_hole": false, 00:05:50.737 "seek_data": false, 00:05:50.737 "copy": true, 00:05:50.737 "nvme_iov_md": false 00:05:50.737 }, 00:05:50.737 "memory_domains": [ 00:05:50.737 { 00:05:50.737 "dma_device_id": "system", 00:05:50.737 "dma_device_type": 1 00:05:50.737 }, 00:05:50.737 { 00:05:50.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.737 "dma_device_type": 2 00:05:50.737 } 00:05:50.737 ], 00:05:50.737 "driver_specific": {} 00:05:50.737 }, 00:05:50.737 { 
00:05:50.737 "name": "Passthru0", 00:05:50.737 "aliases": [ 00:05:50.737 "db7616b4-ac14-5578-8687-da07082fb4eb" 00:05:50.737 ], 00:05:50.737 "product_name": "passthru", 00:05:50.737 "block_size": 512, 00:05:50.737 "num_blocks": 16384, 00:05:50.737 "uuid": "db7616b4-ac14-5578-8687-da07082fb4eb", 00:05:50.737 "assigned_rate_limits": { 00:05:50.737 "rw_ios_per_sec": 0, 00:05:50.737 "rw_mbytes_per_sec": 0, 00:05:50.737 "r_mbytes_per_sec": 0, 00:05:50.737 "w_mbytes_per_sec": 0 00:05:50.737 }, 00:05:50.737 "claimed": false, 00:05:50.737 "zoned": false, 00:05:50.737 "supported_io_types": { 00:05:50.737 "read": true, 00:05:50.737 "write": true, 00:05:50.737 "unmap": true, 00:05:50.737 "flush": true, 00:05:50.737 "reset": true, 00:05:50.737 "nvme_admin": false, 00:05:50.737 "nvme_io": false, 00:05:50.737 "nvme_io_md": false, 00:05:50.737 "write_zeroes": true, 00:05:50.737 "zcopy": true, 00:05:50.737 "get_zone_info": false, 00:05:50.737 "zone_management": false, 00:05:50.737 "zone_append": false, 00:05:50.737 "compare": false, 00:05:50.737 "compare_and_write": false, 00:05:50.737 "abort": true, 00:05:50.737 "seek_hole": false, 00:05:50.737 "seek_data": false, 00:05:50.737 "copy": true, 00:05:50.737 "nvme_iov_md": false 00:05:50.737 }, 00:05:50.737 "memory_domains": [ 00:05:50.737 { 00:05:50.737 "dma_device_id": "system", 00:05:50.737 "dma_device_type": 1 00:05:50.737 }, 00:05:50.737 { 00:05:50.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.737 "dma_device_type": 2 00:05:50.737 } 00:05:50.737 ], 00:05:50.737 "driver_specific": { 00:05:50.737 "passthru": { 00:05:50.737 "name": "Passthru0", 00:05:50.737 "base_bdev_name": "Malloc0" 00:05:50.737 } 00:05:50.737 } 00:05:50.737 } 00:05:50.737 ]' 00:05:50.737 12:36:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:50.737 12:36:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:50.737 12:36:20 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:50.737 12:36:20 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.737 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.737 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.737 12:36:20 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:50.737 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.737 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.737 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.737 12:36:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:50.737 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.737 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.737 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.737 12:36:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:50.738 12:36:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:50.998 12:36:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:50.998 00:05:50.998 real 0m0.318s 00:05:50.998 user 0m0.183s 00:05:50.998 sys 0m0.059s 00:05:50.998 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.998 12:36:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.998 ************************************ 00:05:50.998 END TEST rpc_integrity 00:05:50.998 ************************************ 00:05:50.998 12:36:20 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:50.998 12:36:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.998 12:36:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.998 12:36:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.998 ************************************ 00:05:50.998 START TEST rpc_plugins 
00:05:50.998 ************************************ 00:05:50.998 12:36:20 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:50.998 12:36:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:50.998 12:36:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.998 12:36:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.998 12:36:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.998 12:36:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:50.998 12:36:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:50.998 12:36:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.998 12:36:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.998 12:36:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.998 12:36:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:50.998 { 00:05:50.998 "name": "Malloc1", 00:05:50.998 "aliases": [ 00:05:50.998 "b367bd9d-c215-4028-960c-e2610ad421dc" 00:05:50.998 ], 00:05:50.998 "product_name": "Malloc disk", 00:05:50.998 "block_size": 4096, 00:05:50.998 "num_blocks": 256, 00:05:50.998 "uuid": "b367bd9d-c215-4028-960c-e2610ad421dc", 00:05:50.998 "assigned_rate_limits": { 00:05:50.998 "rw_ios_per_sec": 0, 00:05:50.998 "rw_mbytes_per_sec": 0, 00:05:50.998 "r_mbytes_per_sec": 0, 00:05:50.998 "w_mbytes_per_sec": 0 00:05:50.998 }, 00:05:50.998 "claimed": false, 00:05:50.998 "zoned": false, 00:05:50.998 "supported_io_types": { 00:05:50.998 "read": true, 00:05:50.998 "write": true, 00:05:50.998 "unmap": true, 00:05:50.998 "flush": true, 00:05:50.998 "reset": true, 00:05:50.998 "nvme_admin": false, 00:05:50.998 "nvme_io": false, 00:05:50.998 "nvme_io_md": false, 00:05:50.998 "write_zeroes": true, 00:05:50.998 "zcopy": true, 00:05:50.998 "get_zone_info": false, 00:05:50.998 "zone_management": false, 00:05:50.998 
"zone_append": false, 00:05:50.998 "compare": false, 00:05:50.998 "compare_and_write": false, 00:05:50.998 "abort": true, 00:05:50.998 "seek_hole": false, 00:05:50.998 "seek_data": false, 00:05:50.998 "copy": true, 00:05:50.998 "nvme_iov_md": false 00:05:50.998 }, 00:05:50.998 "memory_domains": [ 00:05:50.998 { 00:05:50.998 "dma_device_id": "system", 00:05:50.998 "dma_device_type": 1 00:05:50.998 }, 00:05:50.998 { 00:05:50.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.998 "dma_device_type": 2 00:05:50.998 } 00:05:50.998 ], 00:05:50.998 "driver_specific": {} 00:05:50.998 } 00:05:50.998 ]' 00:05:50.998 12:36:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:50.998 12:36:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:50.998 12:36:21 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:50.998 12:36:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.998 12:36:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.998 12:36:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.998 12:36:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:50.998 12:36:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.998 12:36:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.998 12:36:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.998 12:36:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:50.998 12:36:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:51.260 12:36:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:51.260 00:05:51.260 real 0m0.155s 00:05:51.260 user 0m0.096s 00:05:51.260 sys 0m0.022s 00:05:51.260 12:36:21 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.260 12:36:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.260 ************************************ 
00:05:51.260 END TEST rpc_plugins 00:05:51.260 ************************************ 00:05:51.260 12:36:21 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:51.260 12:36:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.260 12:36:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.260 12:36:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.260 ************************************ 00:05:51.260 START TEST rpc_trace_cmd_test 00:05:51.260 ************************************ 00:05:51.260 12:36:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:51.260 12:36:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:51.260 12:36:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:51.260 12:36:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.260 12:36:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.260 12:36:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.260 12:36:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:51.260 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3145093", 00:05:51.260 "tpoint_group_mask": "0x8", 00:05:51.260 "iscsi_conn": { 00:05:51.260 "mask": "0x2", 00:05:51.260 "tpoint_mask": "0x0" 00:05:51.260 }, 00:05:51.260 "scsi": { 00:05:51.260 "mask": "0x4", 00:05:51.260 "tpoint_mask": "0x0" 00:05:51.260 }, 00:05:51.260 "bdev": { 00:05:51.260 "mask": "0x8", 00:05:51.260 "tpoint_mask": "0xffffffffffffffff" 00:05:51.260 }, 00:05:51.260 "nvmf_rdma": { 00:05:51.260 "mask": "0x10", 00:05:51.260 "tpoint_mask": "0x0" 00:05:51.260 }, 00:05:51.260 "nvmf_tcp": { 00:05:51.260 "mask": "0x20", 00:05:51.260 "tpoint_mask": "0x0" 00:05:51.260 }, 00:05:51.260 "ftl": { 00:05:51.260 "mask": "0x40", 00:05:51.260 "tpoint_mask": "0x0" 00:05:51.260 }, 00:05:51.260 "blobfs": { 00:05:51.260 "mask": "0x80", 00:05:51.260 
"tpoint_mask": "0x0" 00:05:51.260 }, 00:05:51.260 "dsa": { 00:05:51.260 "mask": "0x200", 00:05:51.260 "tpoint_mask": "0x0" 00:05:51.260 }, 00:05:51.260 "thread": { 00:05:51.260 "mask": "0x400", 00:05:51.260 "tpoint_mask": "0x0" 00:05:51.260 }, 00:05:51.260 "nvme_pcie": { 00:05:51.260 "mask": "0x800", 00:05:51.260 "tpoint_mask": "0x0" 00:05:51.260 }, 00:05:51.260 "iaa": { 00:05:51.260 "mask": "0x1000", 00:05:51.260 "tpoint_mask": "0x0" 00:05:51.260 }, 00:05:51.260 "nvme_tcp": { 00:05:51.260 "mask": "0x2000", 00:05:51.260 "tpoint_mask": "0x0" 00:05:51.260 }, 00:05:51.260 "bdev_nvme": { 00:05:51.260 "mask": "0x4000", 00:05:51.260 "tpoint_mask": "0x0" 00:05:51.260 }, 00:05:51.260 "sock": { 00:05:51.260 "mask": "0x8000", 00:05:51.260 "tpoint_mask": "0x0" 00:05:51.260 }, 00:05:51.260 "blob": { 00:05:51.260 "mask": "0x10000", 00:05:51.260 "tpoint_mask": "0x0" 00:05:51.260 }, 00:05:51.260 "bdev_raid": { 00:05:51.260 "mask": "0x20000", 00:05:51.260 "tpoint_mask": "0x0" 00:05:51.260 }, 00:05:51.260 "scheduler": { 00:05:51.260 "mask": "0x40000", 00:05:51.260 "tpoint_mask": "0x0" 00:05:51.260 } 00:05:51.260 }' 00:05:51.260 12:36:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:51.260 12:36:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:51.260 12:36:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:51.260 12:36:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:51.260 12:36:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:51.260 12:36:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:51.260 12:36:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:51.521 12:36:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:51.521 12:36:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:51.521 12:36:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:05:51.521 00:05:51.521 real 0m0.237s 00:05:51.521 user 0m0.188s 00:05:51.521 sys 0m0.041s 00:05:51.521 12:36:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.521 12:36:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.521 ************************************ 00:05:51.521 END TEST rpc_trace_cmd_test 00:05:51.521 ************************************ 00:05:51.521 12:36:21 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:51.521 12:36:21 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:51.521 12:36:21 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:51.521 12:36:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.521 12:36:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.521 12:36:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.521 ************************************ 00:05:51.521 START TEST rpc_daemon_integrity 00:05:51.521 ************************************ 00:05:51.521 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:51.521 12:36:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:51.521 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.521 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.521 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.521 12:36:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:51.521 12:36:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:51.521 12:36:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:51.521 12:36:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:51.521 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.521 12:36:21 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:51.521 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.521 12:36:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:51.521 12:36:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:51.521 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.521 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.521 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.521 12:36:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:51.521 { 00:05:51.521 "name": "Malloc2", 00:05:51.521 "aliases": [ 00:05:51.521 "e4edd702-0d47-4b08-b3a3-a5a9b65e3081" 00:05:51.521 ], 00:05:51.521 "product_name": "Malloc disk", 00:05:51.521 "block_size": 512, 00:05:51.521 "num_blocks": 16384, 00:05:51.521 "uuid": "e4edd702-0d47-4b08-b3a3-a5a9b65e3081", 00:05:51.521 "assigned_rate_limits": { 00:05:51.521 "rw_ios_per_sec": 0, 00:05:51.521 "rw_mbytes_per_sec": 0, 00:05:51.521 "r_mbytes_per_sec": 0, 00:05:51.521 "w_mbytes_per_sec": 0 00:05:51.521 }, 00:05:51.521 "claimed": false, 00:05:51.521 "zoned": false, 00:05:51.521 "supported_io_types": { 00:05:51.521 "read": true, 00:05:51.521 "write": true, 00:05:51.522 "unmap": true, 00:05:51.522 "flush": true, 00:05:51.522 "reset": true, 00:05:51.522 "nvme_admin": false, 00:05:51.522 "nvme_io": false, 00:05:51.522 "nvme_io_md": false, 00:05:51.522 "write_zeroes": true, 00:05:51.522 "zcopy": true, 00:05:51.522 "get_zone_info": false, 00:05:51.522 "zone_management": false, 00:05:51.522 "zone_append": false, 00:05:51.522 "compare": false, 00:05:51.522 "compare_and_write": false, 00:05:51.522 "abort": true, 00:05:51.522 "seek_hole": false, 00:05:51.522 "seek_data": false, 00:05:51.522 "copy": true, 00:05:51.522 "nvme_iov_md": false 00:05:51.522 }, 00:05:51.522 "memory_domains": [ 00:05:51.522 { 
00:05:51.522 "dma_device_id": "system", 00:05:51.522 "dma_device_type": 1 00:05:51.522 }, 00:05:51.522 { 00:05:51.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.522 "dma_device_type": 2 00:05:51.522 } 00:05:51.522 ], 00:05:51.522 "driver_specific": {} 00:05:51.522 } 00:05:51.522 ]' 00:05:51.522 12:36:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:51.783 12:36:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:51.783 12:36:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:51.783 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.783 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.783 [2024-11-28 12:36:21.690801] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:51.783 [2024-11-28 12:36:21.690846] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:51.783 [2024-11-28 12:36:21.690863] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1146430 00:05:51.783 [2024-11-28 12:36:21.690871] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:51.783 [2024-11-28 12:36:21.692338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:51.783 [2024-11-28 12:36:21.692373] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:51.783 Passthru0 00:05:51.783 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.783 12:36:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:51.783 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.783 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.783 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:05:51.783 12:36:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:51.783 { 00:05:51.783 "name": "Malloc2", 00:05:51.783 "aliases": [ 00:05:51.783 "e4edd702-0d47-4b08-b3a3-a5a9b65e3081" 00:05:51.783 ], 00:05:51.783 "product_name": "Malloc disk", 00:05:51.783 "block_size": 512, 00:05:51.783 "num_blocks": 16384, 00:05:51.783 "uuid": "e4edd702-0d47-4b08-b3a3-a5a9b65e3081", 00:05:51.783 "assigned_rate_limits": { 00:05:51.783 "rw_ios_per_sec": 0, 00:05:51.783 "rw_mbytes_per_sec": 0, 00:05:51.783 "r_mbytes_per_sec": 0, 00:05:51.783 "w_mbytes_per_sec": 0 00:05:51.783 }, 00:05:51.783 "claimed": true, 00:05:51.783 "claim_type": "exclusive_write", 00:05:51.783 "zoned": false, 00:05:51.783 "supported_io_types": { 00:05:51.783 "read": true, 00:05:51.783 "write": true, 00:05:51.783 "unmap": true, 00:05:51.783 "flush": true, 00:05:51.783 "reset": true, 00:05:51.783 "nvme_admin": false, 00:05:51.783 "nvme_io": false, 00:05:51.783 "nvme_io_md": false, 00:05:51.783 "write_zeroes": true, 00:05:51.783 "zcopy": true, 00:05:51.783 "get_zone_info": false, 00:05:51.783 "zone_management": false, 00:05:51.783 "zone_append": false, 00:05:51.783 "compare": false, 00:05:51.783 "compare_and_write": false, 00:05:51.783 "abort": true, 00:05:51.783 "seek_hole": false, 00:05:51.783 "seek_data": false, 00:05:51.783 "copy": true, 00:05:51.783 "nvme_iov_md": false 00:05:51.783 }, 00:05:51.783 "memory_domains": [ 00:05:51.783 { 00:05:51.783 "dma_device_id": "system", 00:05:51.783 "dma_device_type": 1 00:05:51.783 }, 00:05:51.783 { 00:05:51.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.783 "dma_device_type": 2 00:05:51.783 } 00:05:51.783 ], 00:05:51.783 "driver_specific": {} 00:05:51.783 }, 00:05:51.783 { 00:05:51.783 "name": "Passthru0", 00:05:51.783 "aliases": [ 00:05:51.783 "8d15b113-d42a-5f43-a5d4-fd01a05cdc0d" 00:05:51.783 ], 00:05:51.783 "product_name": "passthru", 00:05:51.783 "block_size": 512, 00:05:51.783 "num_blocks": 16384, 00:05:51.783 "uuid": 
"8d15b113-d42a-5f43-a5d4-fd01a05cdc0d", 00:05:51.783 "assigned_rate_limits": { 00:05:51.783 "rw_ios_per_sec": 0, 00:05:51.783 "rw_mbytes_per_sec": 0, 00:05:51.783 "r_mbytes_per_sec": 0, 00:05:51.783 "w_mbytes_per_sec": 0 00:05:51.783 }, 00:05:51.783 "claimed": false, 00:05:51.783 "zoned": false, 00:05:51.783 "supported_io_types": { 00:05:51.783 "read": true, 00:05:51.783 "write": true, 00:05:51.783 "unmap": true, 00:05:51.783 "flush": true, 00:05:51.783 "reset": true, 00:05:51.783 "nvme_admin": false, 00:05:51.783 "nvme_io": false, 00:05:51.783 "nvme_io_md": false, 00:05:51.783 "write_zeroes": true, 00:05:51.783 "zcopy": true, 00:05:51.783 "get_zone_info": false, 00:05:51.783 "zone_management": false, 00:05:51.783 "zone_append": false, 00:05:51.783 "compare": false, 00:05:51.783 "compare_and_write": false, 00:05:51.783 "abort": true, 00:05:51.783 "seek_hole": false, 00:05:51.783 "seek_data": false, 00:05:51.783 "copy": true, 00:05:51.783 "nvme_iov_md": false 00:05:51.783 }, 00:05:51.783 "memory_domains": [ 00:05:51.783 { 00:05:51.783 "dma_device_id": "system", 00:05:51.783 "dma_device_type": 1 00:05:51.783 }, 00:05:51.783 { 00:05:51.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.783 "dma_device_type": 2 00:05:51.783 } 00:05:51.783 ], 00:05:51.783 "driver_specific": { 00:05:51.783 "passthru": { 00:05:51.783 "name": "Passthru0", 00:05:51.783 "base_bdev_name": "Malloc2" 00:05:51.783 } 00:05:51.783 } 00:05:51.783 } 00:05:51.783 ]' 00:05:51.783 12:36:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:51.783 12:36:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:51.783 12:36:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:51.783 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.784 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.784 12:36:21 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.784 12:36:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:51.784 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.784 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.784 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.784 12:36:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:51.784 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.784 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.784 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.784 12:36:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:51.784 12:36:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:51.784 12:36:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:51.784 00:05:51.784 real 0m0.307s 00:05:51.784 user 0m0.188s 00:05:51.784 sys 0m0.050s 00:05:51.784 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.784 12:36:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.784 ************************************ 00:05:51.784 END TEST rpc_daemon_integrity 00:05:51.784 ************************************ 00:05:51.784 12:36:21 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:51.784 12:36:21 rpc -- rpc/rpc.sh@84 -- # killprocess 3145093 00:05:51.784 12:36:21 rpc -- common/autotest_common.sh@954 -- # '[' -z 3145093 ']' 00:05:51.784 12:36:21 rpc -- common/autotest_common.sh@958 -- # kill -0 3145093 00:05:51.784 12:36:21 rpc -- common/autotest_common.sh@959 -- # uname 00:05:51.784 12:36:21 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.784 12:36:21 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3145093 00:05:52.045 12:36:21 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.045 12:36:21 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.045 12:36:21 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3145093' 00:05:52.045 killing process with pid 3145093 00:05:52.045 12:36:21 rpc -- common/autotest_common.sh@973 -- # kill 3145093 00:05:52.045 12:36:21 rpc -- common/autotest_common.sh@978 -- # wait 3145093 00:05:52.307 00:05:52.307 real 0m2.732s 00:05:52.307 user 0m3.333s 00:05:52.307 sys 0m0.894s 00:05:52.307 12:36:22 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.307 12:36:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.307 ************************************ 00:05:52.307 END TEST rpc 00:05:52.307 ************************************ 00:05:52.307 12:36:22 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:52.307 12:36:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.307 12:36:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.307 12:36:22 -- common/autotest_common.sh@10 -- # set +x 00:05:52.307 ************************************ 00:05:52.307 START TEST skip_rpc 00:05:52.307 ************************************ 00:05:52.307 12:36:22 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:52.307 * Looking for test storage... 
00:05:52.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:52.307 12:36:22 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:52.307 12:36:22 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:52.307 12:36:22 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:52.568 12:36:22 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.568 12:36:22 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:52.568 12:36:22 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.568 12:36:22 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:52.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.568 --rc genhtml_branch_coverage=1 00:05:52.568 --rc genhtml_function_coverage=1 00:05:52.568 --rc genhtml_legend=1 00:05:52.568 --rc geninfo_all_blocks=1 00:05:52.568 --rc geninfo_unexecuted_blocks=1 00:05:52.568 00:05:52.568 ' 00:05:52.568 12:36:22 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:52.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.568 --rc genhtml_branch_coverage=1 00:05:52.568 --rc genhtml_function_coverage=1 00:05:52.568 --rc genhtml_legend=1 00:05:52.568 --rc geninfo_all_blocks=1 00:05:52.568 --rc geninfo_unexecuted_blocks=1 00:05:52.568 00:05:52.568 ' 00:05:52.568 12:36:22 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:52.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.568 --rc genhtml_branch_coverage=1 00:05:52.568 --rc genhtml_function_coverage=1 00:05:52.569 --rc genhtml_legend=1 00:05:52.569 --rc geninfo_all_blocks=1 00:05:52.569 --rc geninfo_unexecuted_blocks=1 00:05:52.569 00:05:52.569 ' 00:05:52.569 12:36:22 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:52.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.569 --rc genhtml_branch_coverage=1 00:05:52.569 --rc genhtml_function_coverage=1 00:05:52.569 --rc genhtml_legend=1 00:05:52.569 --rc geninfo_all_blocks=1 00:05:52.569 --rc geninfo_unexecuted_blocks=1 00:05:52.569 00:05:52.569 ' 00:05:52.569 12:36:22 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:52.569 12:36:22 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:52.569 12:36:22 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:52.569 12:36:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.569 12:36:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.569 12:36:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.569 ************************************ 00:05:52.569 START TEST skip_rpc 00:05:52.569 ************************************ 00:05:52.569 12:36:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:52.569 12:36:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3145879 00:05:52.569 12:36:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.569 12:36:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:52.569 12:36:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:05:52.569 [2024-11-28 12:36:22.567119] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:05:52.569 [2024-11-28 12:36:22.567188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3145879 ] 00:05:52.830 [2024-11-28 12:36:22.704609] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:52.830 [2024-11-28 12:36:22.764662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.830 [2024-11-28 12:36:22.792768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:58.123 
12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3145879 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3145879 ']' 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3145879 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3145879 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3145879' 00:05:58.123 killing process with pid 3145879 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3145879 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3145879 00:05:58.123 00:05:58.123 real 0m5.261s 00:05:58.123 user 0m4.924s 00:05:58.123 sys 0m0.286s 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.123 12:36:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.123 ************************************ 00:05:58.123 END TEST skip_rpc 00:05:58.123 ************************************ 00:05:58.123 12:36:27 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 
00:05:58.123 12:36:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.123 12:36:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.123 12:36:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.123 ************************************ 00:05:58.123 START TEST skip_rpc_with_json 00:05:58.123 ************************************ 00:05:58.123 12:36:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:58.123 12:36:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:58.123 12:36:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3146916 00:05:58.123 12:36:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.123 12:36:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.123 12:36:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3146916 00:05:58.123 12:36:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3146916 ']' 00:05:58.123 12:36:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.123 12:36:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.123 12:36:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:58.123 12:36:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.123 12:36:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.123 [2024-11-28 12:36:27.902805] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:05:58.123 [2024-11-28 12:36:27.902855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3146916 ] 00:05:58.123 [2024-11-28 12:36:28.035768] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:58.124 [2024-11-28 12:36:28.088681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.124 [2024-11-28 12:36:28.107847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.695 12:36:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.695 12:36:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:58.695 12:36:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:58.695 12:36:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.695 12:36:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.695 [2024-11-28 12:36:28.704887] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:58.695 request: 00:05:58.695 { 00:05:58.695 "trtype": "tcp", 00:05:58.695 "method": "nvmf_get_transports", 00:05:58.695 "req_id": 1 00:05:58.695 } 00:05:58.695 Got JSON-RPC error response 00:05:58.695 response: 00:05:58.695 { 00:05:58.695 "code": -19, 00:05:58.695 "message": "No such device" 00:05:58.695 } 00:05:58.695 12:36:28 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:58.695 12:36:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:58.695 12:36:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.695 12:36:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.695 [2024-11-28 12:36:28.716952] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:58.695 12:36:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.695 12:36:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:58.695 12:36:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.695 12:36:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.956 12:36:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.956 12:36:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:58.956 { 00:05:58.956 "subsystems": [ 00:05:58.956 { 00:05:58.956 "subsystem": "fsdev", 00:05:58.957 "config": [ 00:05:58.957 { 00:05:58.957 "method": "fsdev_set_opts", 00:05:58.957 "params": { 00:05:58.957 "fsdev_io_pool_size": 65535, 00:05:58.957 "fsdev_io_cache_size": 256 00:05:58.957 } 00:05:58.957 } 00:05:58.957 ] 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "subsystem": "vfio_user_target", 00:05:58.957 "config": null 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "subsystem": "keyring", 00:05:58.957 "config": [] 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "subsystem": "iobuf", 00:05:58.957 "config": [ 00:05:58.957 { 00:05:58.957 "method": "iobuf_set_options", 00:05:58.957 "params": { 00:05:58.957 "small_pool_count": 8192, 00:05:58.957 "large_pool_count": 1024, 00:05:58.957 "small_bufsize": 8192, 00:05:58.957 "large_bufsize": 135168, 
00:05:58.957 "enable_numa": false 00:05:58.957 } 00:05:58.957 } 00:05:58.957 ] 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "subsystem": "sock", 00:05:58.957 "config": [ 00:05:58.957 { 00:05:58.957 "method": "sock_set_default_impl", 00:05:58.957 "params": { 00:05:58.957 "impl_name": "posix" 00:05:58.957 } 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "method": "sock_impl_set_options", 00:05:58.957 "params": { 00:05:58.957 "impl_name": "ssl", 00:05:58.957 "recv_buf_size": 4096, 00:05:58.957 "send_buf_size": 4096, 00:05:58.957 "enable_recv_pipe": true, 00:05:58.957 "enable_quickack": false, 00:05:58.957 "enable_placement_id": 0, 00:05:58.957 "enable_zerocopy_send_server": true, 00:05:58.957 "enable_zerocopy_send_client": false, 00:05:58.957 "zerocopy_threshold": 0, 00:05:58.957 "tls_version": 0, 00:05:58.957 "enable_ktls": false 00:05:58.957 } 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "method": "sock_impl_set_options", 00:05:58.957 "params": { 00:05:58.957 "impl_name": "posix", 00:05:58.957 "recv_buf_size": 2097152, 00:05:58.957 "send_buf_size": 2097152, 00:05:58.957 "enable_recv_pipe": true, 00:05:58.957 "enable_quickack": false, 00:05:58.957 "enable_placement_id": 0, 00:05:58.957 "enable_zerocopy_send_server": true, 00:05:58.957 "enable_zerocopy_send_client": false, 00:05:58.957 "zerocopy_threshold": 0, 00:05:58.957 "tls_version": 0, 00:05:58.957 "enable_ktls": false 00:05:58.957 } 00:05:58.957 } 00:05:58.957 ] 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "subsystem": "vmd", 00:05:58.957 "config": [] 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "subsystem": "accel", 00:05:58.957 "config": [ 00:05:58.957 { 00:05:58.957 "method": "accel_set_options", 00:05:58.957 "params": { 00:05:58.957 "small_cache_size": 128, 00:05:58.957 "large_cache_size": 16, 00:05:58.957 "task_count": 2048, 00:05:58.957 "sequence_count": 2048, 00:05:58.957 "buf_count": 2048 00:05:58.957 } 00:05:58.957 } 00:05:58.957 ] 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "subsystem": "bdev", 00:05:58.957 
"config": [ 00:05:58.957 { 00:05:58.957 "method": "bdev_set_options", 00:05:58.957 "params": { 00:05:58.957 "bdev_io_pool_size": 65535, 00:05:58.957 "bdev_io_cache_size": 256, 00:05:58.957 "bdev_auto_examine": true, 00:05:58.957 "iobuf_small_cache_size": 128, 00:05:58.957 "iobuf_large_cache_size": 16 00:05:58.957 } 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "method": "bdev_raid_set_options", 00:05:58.957 "params": { 00:05:58.957 "process_window_size_kb": 1024, 00:05:58.957 "process_max_bandwidth_mb_sec": 0 00:05:58.957 } 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "method": "bdev_iscsi_set_options", 00:05:58.957 "params": { 00:05:58.957 "timeout_sec": 30 00:05:58.957 } 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "method": "bdev_nvme_set_options", 00:05:58.957 "params": { 00:05:58.957 "action_on_timeout": "none", 00:05:58.957 "timeout_us": 0, 00:05:58.957 "timeout_admin_us": 0, 00:05:58.957 "keep_alive_timeout_ms": 10000, 00:05:58.957 "arbitration_burst": 0, 00:05:58.957 "low_priority_weight": 0, 00:05:58.957 "medium_priority_weight": 0, 00:05:58.957 "high_priority_weight": 0, 00:05:58.957 "nvme_adminq_poll_period_us": 10000, 00:05:58.957 "nvme_ioq_poll_period_us": 0, 00:05:58.957 "io_queue_requests": 0, 00:05:58.957 "delay_cmd_submit": true, 00:05:58.957 "transport_retry_count": 4, 00:05:58.957 "bdev_retry_count": 3, 00:05:58.957 "transport_ack_timeout": 0, 00:05:58.957 "ctrlr_loss_timeout_sec": 0, 00:05:58.957 "reconnect_delay_sec": 0, 00:05:58.957 "fast_io_fail_timeout_sec": 0, 00:05:58.957 "disable_auto_failback": false, 00:05:58.957 "generate_uuids": false, 00:05:58.957 "transport_tos": 0, 00:05:58.957 "nvme_error_stat": false, 00:05:58.957 "rdma_srq_size": 0, 00:05:58.957 "io_path_stat": false, 00:05:58.957 "allow_accel_sequence": false, 00:05:58.957 "rdma_max_cq_size": 0, 00:05:58.957 "rdma_cm_event_timeout_ms": 0, 00:05:58.957 "dhchap_digests": [ 00:05:58.957 "sha256", 00:05:58.957 "sha384", 00:05:58.957 "sha512" 00:05:58.957 ], 00:05:58.957 
"dhchap_dhgroups": [ 00:05:58.957 "null", 00:05:58.957 "ffdhe2048", 00:05:58.957 "ffdhe3072", 00:05:58.957 "ffdhe4096", 00:05:58.957 "ffdhe6144", 00:05:58.957 "ffdhe8192" 00:05:58.957 ] 00:05:58.957 } 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "method": "bdev_nvme_set_hotplug", 00:05:58.957 "params": { 00:05:58.957 "period_us": 100000, 00:05:58.957 "enable": false 00:05:58.957 } 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "method": "bdev_wait_for_examine" 00:05:58.957 } 00:05:58.957 ] 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "subsystem": "scsi", 00:05:58.957 "config": null 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "subsystem": "scheduler", 00:05:58.957 "config": [ 00:05:58.957 { 00:05:58.957 "method": "framework_set_scheduler", 00:05:58.957 "params": { 00:05:58.957 "name": "static" 00:05:58.957 } 00:05:58.957 } 00:05:58.957 ] 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "subsystem": "vhost_scsi", 00:05:58.957 "config": [] 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "subsystem": "vhost_blk", 00:05:58.957 "config": [] 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "subsystem": "ublk", 00:05:58.957 "config": [] 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "subsystem": "nbd", 00:05:58.957 "config": [] 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "subsystem": "nvmf", 00:05:58.957 "config": [ 00:05:58.957 { 00:05:58.957 "method": "nvmf_set_config", 00:05:58.957 "params": { 00:05:58.957 "discovery_filter": "match_any", 00:05:58.957 "admin_cmd_passthru": { 00:05:58.957 "identify_ctrlr": false 00:05:58.957 }, 00:05:58.957 "dhchap_digests": [ 00:05:58.957 "sha256", 00:05:58.957 "sha384", 00:05:58.957 "sha512" 00:05:58.957 ], 00:05:58.957 "dhchap_dhgroups": [ 00:05:58.957 "null", 00:05:58.957 "ffdhe2048", 00:05:58.957 "ffdhe3072", 00:05:58.957 "ffdhe4096", 00:05:58.957 "ffdhe6144", 00:05:58.957 "ffdhe8192" 00:05:58.957 ] 00:05:58.957 } 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "method": "nvmf_set_max_subsystems", 00:05:58.957 "params": { 00:05:58.957 "max_subsystems": 1024 
00:05:58.957 } 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "method": "nvmf_set_crdt", 00:05:58.957 "params": { 00:05:58.957 "crdt1": 0, 00:05:58.957 "crdt2": 0, 00:05:58.957 "crdt3": 0 00:05:58.957 } 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "method": "nvmf_create_transport", 00:05:58.957 "params": { 00:05:58.957 "trtype": "TCP", 00:05:58.957 "max_queue_depth": 128, 00:05:58.957 "max_io_qpairs_per_ctrlr": 127, 00:05:58.957 "in_capsule_data_size": 4096, 00:05:58.957 "max_io_size": 131072, 00:05:58.957 "io_unit_size": 131072, 00:05:58.957 "max_aq_depth": 128, 00:05:58.957 "num_shared_buffers": 511, 00:05:58.957 "buf_cache_size": 4294967295, 00:05:58.957 "dif_insert_or_strip": false, 00:05:58.957 "zcopy": false, 00:05:58.957 "c2h_success": true, 00:05:58.957 "sock_priority": 0, 00:05:58.957 "abort_timeout_sec": 1, 00:05:58.957 "ack_timeout": 0, 00:05:58.957 "data_wr_pool_size": 0 00:05:58.957 } 00:05:58.957 } 00:05:58.957 ] 00:05:58.957 }, 00:05:58.957 { 00:05:58.957 "subsystem": "iscsi", 00:05:58.957 "config": [ 00:05:58.957 { 00:05:58.957 "method": "iscsi_set_options", 00:05:58.957 "params": { 00:05:58.957 "node_base": "iqn.2016-06.io.spdk", 00:05:58.957 "max_sessions": 128, 00:05:58.957 "max_connections_per_session": 2, 00:05:58.957 "max_queue_depth": 64, 00:05:58.957 "default_time2wait": 2, 00:05:58.957 "default_time2retain": 20, 00:05:58.957 "first_burst_length": 8192, 00:05:58.957 "immediate_data": true, 00:05:58.957 "allow_duplicated_isid": false, 00:05:58.957 "error_recovery_level": 0, 00:05:58.957 "nop_timeout": 60, 00:05:58.957 "nop_in_interval": 30, 00:05:58.958 "disable_chap": false, 00:05:58.958 "require_chap": false, 00:05:58.958 "mutual_chap": false, 00:05:58.958 "chap_group": 0, 00:05:58.958 "max_large_datain_per_connection": 64, 00:05:58.958 "max_r2t_per_connection": 4, 00:05:58.958 "pdu_pool_size": 36864, 00:05:58.958 "immediate_data_pool_size": 16384, 00:05:58.958 "data_out_pool_size": 2048 00:05:58.958 } 00:05:58.958 } 00:05:58.958 ] 00:05:58.958 
} 00:05:58.958 ] 00:05:58.958 } 00:05:58.958 12:36:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:58.958 12:36:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3146916 00:05:58.958 12:36:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3146916 ']' 00:05:58.958 12:36:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3146916 00:05:58.958 12:36:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:58.958 12:36:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.958 12:36:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3146916 00:05:58.958 12:36:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.958 12:36:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.958 12:36:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3146916' 00:05:58.958 killing process with pid 3146916 00:05:58.958 12:36:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3146916 00:05:58.958 12:36:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3146916 00:05:59.219 12:36:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3147260 00:05:59.219 12:36:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:59.219 12:36:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3147260 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 
3147260 ']' 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3147260 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3147260 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3147260' 00:06:04.507 killing process with pid 3147260 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3147260 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3147260 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:04.507 00:06:04.507 real 0m6.554s 00:06:04.507 user 0m6.232s 00:06:04.507 sys 0m0.602s 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.507 ************************************ 00:06:04.507 END TEST skip_rpc_with_json 00:06:04.507 ************************************ 00:06:04.507 12:36:34 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:04.507 12:36:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:06:04.507 12:36:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.507 12:36:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.507 ************************************ 00:06:04.507 START TEST skip_rpc_with_delay 00:06:04.507 ************************************ 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:04.507 12:36:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:04.507 [2024-11-28 12:36:34.551566] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:04.508 12:36:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:04.508 12:36:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:04.508 12:36:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:04.508 12:36:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:04.508 00:06:04.508 real 0m0.087s 00:06:04.508 user 0m0.059s 00:06:04.508 sys 0m0.028s 00:06:04.508 12:36:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.508 12:36:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:04.508 ************************************ 00:06:04.508 END TEST skip_rpc_with_delay 00:06:04.508 ************************************ 00:06:04.508 12:36:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:04.508 12:36:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:04.508 12:36:34 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:04.508 12:36:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.508 12:36:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.508 12:36:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.769 ************************************ 00:06:04.769 START TEST exit_on_failed_rpc_init 00:06:04.769 ************************************ 00:06:04.769 12:36:34 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:04.769 12:36:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3148328 00:06:04.769 12:36:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3148328 00:06:04.769 12:36:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.769 12:36:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3148328 ']' 00:06:04.769 12:36:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.769 12:36:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.769 12:36:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.769 12:36:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.769 12:36:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:04.769 [2024-11-28 12:36:34.707590] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:04.769 [2024-11-28 12:36:34.707639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3148328 ] 00:06:04.769 [2024-11-28 12:36:34.840807] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:04.769 [2024-11-28 12:36:34.893262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.030 [2024-11-28 12:36:34.909999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.600 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.600 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:05.600 12:36:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.600 12:36:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:05.600 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:05.600 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:05.600 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:05.600 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.600 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:05.600 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.600 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:05.600 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.600 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:05.600 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:05.600 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:05.600 [2024-11-28 12:36:35.558714] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:05.600 [2024-11-28 12:36:35.558767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3148663 ] 00:06:05.600 [2024-11-28 12:36:35.691372] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:05.861 [2024-11-28 12:36:35.750162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.861 [2024-11-28 12:36:35.768083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.861 [2024-11-28 12:36:35.768134] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:05.861 [2024-11-28 12:36:35.768143] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:05.861 [2024-11-28 12:36:35.768150] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:05.861 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:05.861 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:05.861 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:05.861 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:05.861 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:05.861 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:05.861 12:36:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:05.861 12:36:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3148328 00:06:05.861 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3148328 ']' 00:06:05.861 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3148328 00:06:05.861 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:05.861 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.861 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3148328 00:06:05.861 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.861 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.861 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3148328' 
00:06:05.861 killing process with pid 3148328 00:06:05.861 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3148328 00:06:05.861 12:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3148328 00:06:06.154 00:06:06.154 real 0m1.395s 00:06:06.154 user 0m1.509s 00:06:06.154 sys 0m0.367s 00:06:06.154 12:36:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.154 12:36:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:06.154 ************************************ 00:06:06.154 END TEST exit_on_failed_rpc_init 00:06:06.154 ************************************ 00:06:06.154 12:36:36 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:06.154 00:06:06.154 real 0m13.815s 00:06:06.154 user 0m12.955s 00:06:06.155 sys 0m1.599s 00:06:06.155 12:36:36 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.155 12:36:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.155 ************************************ 00:06:06.155 END TEST skip_rpc 00:06:06.155 ************************************ 00:06:06.155 12:36:36 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:06.155 12:36:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.155 12:36:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.155 12:36:36 -- common/autotest_common.sh@10 -- # set +x 00:06:06.155 ************************************ 00:06:06.155 START TEST rpc_client 00:06:06.155 ************************************ 00:06:06.155 12:36:36 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:06.155 * Looking for test storage... 
00:06:06.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:06.155 12:36:36 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.155 12:36:36 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.155 12:36:36 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.417 12:36:36 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.417 12:36:36 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:06.417 12:36:36 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.417 12:36:36 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.417 --rc genhtml_branch_coverage=1 00:06:06.417 --rc genhtml_function_coverage=1 00:06:06.417 --rc genhtml_legend=1 00:06:06.417 --rc geninfo_all_blocks=1 00:06:06.417 --rc geninfo_unexecuted_blocks=1 00:06:06.417 00:06:06.417 ' 00:06:06.417 12:36:36 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.417 --rc genhtml_branch_coverage=1 00:06:06.417 --rc genhtml_function_coverage=1 00:06:06.417 --rc genhtml_legend=1 00:06:06.417 --rc geninfo_all_blocks=1 00:06:06.417 --rc geninfo_unexecuted_blocks=1 00:06:06.417 00:06:06.417 ' 00:06:06.417 12:36:36 rpc_client -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.417 --rc genhtml_branch_coverage=1 00:06:06.417 --rc genhtml_function_coverage=1 00:06:06.417 --rc genhtml_legend=1 00:06:06.417 --rc geninfo_all_blocks=1 00:06:06.417 --rc geninfo_unexecuted_blocks=1 00:06:06.417 00:06:06.417 ' 00:06:06.417 12:36:36 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.417 --rc genhtml_branch_coverage=1 00:06:06.417 --rc genhtml_function_coverage=1 00:06:06.417 --rc genhtml_legend=1 00:06:06.417 --rc geninfo_all_blocks=1 00:06:06.417 --rc geninfo_unexecuted_blocks=1 00:06:06.417 00:06:06.417 ' 00:06:06.417 12:36:36 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:06.417 OK 00:06:06.417 12:36:36 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:06.417 00:06:06.417 real 0m0.228s 00:06:06.417 user 0m0.129s 00:06:06.417 sys 0m0.114s 00:06:06.417 12:36:36 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.417 12:36:36 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:06.417 ************************************ 00:06:06.417 END TEST rpc_client 00:06:06.417 ************************************ 00:06:06.417 12:36:36 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:06.417 12:36:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.417 12:36:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.417 12:36:36 -- common/autotest_common.sh@10 -- # set +x 00:06:06.417 ************************************ 00:06:06.417 START TEST json_config 00:06:06.417 ************************************ 00:06:06.417 12:36:36 json_config -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:06.679 12:36:36 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.679 12:36:36 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.679 12:36:36 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.679 12:36:36 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.679 12:36:36 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.679 12:36:36 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.679 12:36:36 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.679 12:36:36 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.679 12:36:36 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.679 12:36:36 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.679 12:36:36 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.679 12:36:36 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.679 12:36:36 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.679 12:36:36 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.679 12:36:36 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.679 12:36:36 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:06.679 12:36:36 json_config -- scripts/common.sh@345 -- # : 1 00:06:06.679 12:36:36 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.679 12:36:36 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.679 12:36:36 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:06.679 12:36:36 json_config -- scripts/common.sh@353 -- # local d=1 00:06:06.679 12:36:36 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.679 12:36:36 json_config -- scripts/common.sh@355 -- # echo 1 00:06:06.679 12:36:36 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.679 12:36:36 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:06.679 12:36:36 json_config -- scripts/common.sh@353 -- # local d=2 00:06:06.679 12:36:36 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.679 12:36:36 json_config -- scripts/common.sh@355 -- # echo 2 00:06:06.679 12:36:36 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.679 12:36:36 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.679 12:36:36 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.679 12:36:36 json_config -- scripts/common.sh@368 -- # return 0 00:06:06.679 12:36:36 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.679 12:36:36 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.680 --rc genhtml_branch_coverage=1 00:06:06.680 --rc genhtml_function_coverage=1 00:06:06.680 --rc genhtml_legend=1 00:06:06.680 --rc geninfo_all_blocks=1 00:06:06.680 --rc geninfo_unexecuted_blocks=1 00:06:06.680 00:06:06.680 ' 00:06:06.680 12:36:36 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.680 --rc genhtml_branch_coverage=1 00:06:06.680 --rc genhtml_function_coverage=1 00:06:06.680 --rc genhtml_legend=1 00:06:06.680 --rc geninfo_all_blocks=1 00:06:06.680 --rc geninfo_unexecuted_blocks=1 00:06:06.680 00:06:06.680 ' 00:06:06.680 12:36:36 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.680 --rc genhtml_branch_coverage=1 00:06:06.680 --rc genhtml_function_coverage=1 00:06:06.680 --rc genhtml_legend=1 00:06:06.680 --rc geninfo_all_blocks=1 00:06:06.680 --rc geninfo_unexecuted_blocks=1 00:06:06.680 00:06:06.680 ' 00:06:06.680 12:36:36 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.680 --rc genhtml_branch_coverage=1 00:06:06.680 --rc genhtml_function_coverage=1 00:06:06.680 --rc genhtml_legend=1 00:06:06.680 --rc geninfo_all_blocks=1 00:06:06.680 --rc geninfo_unexecuted_blocks=1 00:06:06.680 00:06:06.680 ' 00:06:06.680 12:36:36 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:06.680 12:36:36 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:06.680 12:36:36 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.680 12:36:36 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.680 12:36:36 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.680 12:36:36 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.680 12:36:36 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.680 12:36:36 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.680 12:36:36 json_config -- paths/export.sh@5 -- # export PATH 00:06:06.680 12:36:36 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@51 -- # : 0 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:06.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:06.680 12:36:36 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:06.680 12:36:36 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:06.680 12:36:36 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:06.680 12:36:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:06.680 12:36:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:06.680 12:36:36 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:06.680 12:36:36 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:06.680 12:36:36 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:06.680 12:36:36 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:06.680 12:36:36 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:06.680 12:36:36 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:06.680 12:36:36 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:06.680 12:36:36 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:06.680 12:36:36 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:06.680 12:36:36 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:06.680 12:36:36 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:06.680 12:36:36 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:06.680 INFO: JSON configuration test init 00:06:06.680 12:36:36 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:06.680 12:36:36 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:06.680 12:36:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:06.680 12:36:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.680 12:36:36 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:06.680 12:36:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:06.680 12:36:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.680 12:36:36 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:06.680 12:36:36 json_config -- json_config/common.sh@9 -- # local app=target 00:06:06.680 12:36:36 json_config -- json_config/common.sh@10 -- # shift 00:06:06.680 12:36:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:06.680 12:36:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:06.680 12:36:36 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:06.680 12:36:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.680 12:36:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.680 12:36:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3148877 00:06:06.680 12:36:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:06.680 Waiting for target to run... 
00:06:06.680 12:36:36 json_config -- json_config/common.sh@25 -- # waitforlisten 3148877 /var/tmp/spdk_tgt.sock 00:06:06.680 12:36:36 json_config -- common/autotest_common.sh@835 -- # '[' -z 3148877 ']' 00:06:06.680 12:36:36 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:06.680 12:36:36 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:06.680 12:36:36 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.680 12:36:36 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:06.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:06.680 12:36:36 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.680 12:36:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.680 [2024-11-28 12:36:36.764998] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:06.680 [2024-11-28 12:36:36.765073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3148877 ] 00:06:07.253 [2024-11-28 12:36:37.139567] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:07.253 [2024-11-28 12:36:37.194237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.253 [2024-11-28 12:36:37.203587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.513 12:36:37 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.513 12:36:37 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:07.513 12:36:37 json_config -- json_config/common.sh@26 -- # echo '' 00:06:07.513 00:06:07.513 12:36:37 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:07.513 12:36:37 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:07.513 12:36:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:07.513 12:36:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.513 12:36:37 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:07.513 12:36:37 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:07.513 12:36:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:07.513 12:36:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.513 12:36:37 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:07.513 12:36:37 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:07.513 12:36:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:08.117 12:36:38 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:08.117 12:36:38 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:08.117 12:36:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.117 12:36:38 json_config -- common/autotest_common.sh@10 -- # set +x 
00:06:08.117 12:36:38 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:08.117 12:36:38 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:08.117 12:36:38 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:08.117 12:36:38 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:08.117 12:36:38 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:08.117 12:36:38 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:08.117 12:36:38 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:08.117 12:36:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:08.377 12:36:38 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:08.377 12:36:38 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:08.377 12:36:38 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:08.377 12:36:38 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:08.377 12:36:38 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:08.377 12:36:38 json_config -- json_config/json_config.sh@54 -- # sort 00:06:08.377 12:36:38 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:08.377 12:36:38 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:08.377 12:36:38 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:08.377 12:36:38 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:08.377 12:36:38 json_config -- common/autotest_common.sh@732 -- # 
xtrace_disable 00:06:08.377 12:36:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.377 12:36:38 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:08.377 12:36:38 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:08.377 12:36:38 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:08.377 12:36:38 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:08.377 12:36:38 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:08.377 12:36:38 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:08.377 12:36:38 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:08.377 12:36:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.377 12:36:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.377 12:36:38 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:08.377 12:36:38 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:08.377 12:36:38 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:08.377 12:36:38 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:08.377 12:36:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:08.637 MallocForNvmf0 00:06:08.637 12:36:38 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:08.637 12:36:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:08.637 MallocForNvmf1 00:06:08.637 12:36:38 json_config -- json_config/json_config.sh@252 -- # tgt_rpc 
nvmf_create_transport -t tcp -u 8192 -c 0 00:06:08.637 12:36:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:08.899 [2024-11-28 12:36:38.867354] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:08.899 12:36:38 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:08.899 12:36:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:09.189 12:36:39 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:09.189 12:36:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:09.189 12:36:39 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:09.189 12:36:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:09.559 12:36:39 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:09.559 12:36:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:09.559 [2024-11-28 12:36:39.535845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4420 *** 00:06:09.559 12:36:39 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:09.559 12:36:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:09.559 12:36:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.559 12:36:39 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:09.559 12:36:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:09.559 12:36:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.559 12:36:39 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:09.559 12:36:39 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:09.559 12:36:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:09.820 MallocBdevForConfigChangeCheck 00:06:09.820 12:36:39 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:09.820 12:36:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:09.820 12:36:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.820 12:36:39 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:09.821 12:36:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.081 12:36:40 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:10.081 INFO: shutting down applications... 
00:06:10.081 12:36:40 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:10.081 12:36:40 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:10.081 12:36:40 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:10.081 12:36:40 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:10.653 Calling clear_iscsi_subsystem 00:06:10.653 Calling clear_nvmf_subsystem 00:06:10.653 Calling clear_nbd_subsystem 00:06:10.653 Calling clear_ublk_subsystem 00:06:10.653 Calling clear_vhost_blk_subsystem 00:06:10.653 Calling clear_vhost_scsi_subsystem 00:06:10.653 Calling clear_bdev_subsystem 00:06:10.653 12:36:40 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:10.653 12:36:40 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:10.653 12:36:40 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:10.653 12:36:40 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.653 12:36:40 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:10.653 12:36:40 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:10.914 12:36:40 json_config -- json_config/json_config.sh@352 -- # break 00:06:10.914 12:36:40 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:10.914 12:36:40 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:10.914 12:36:40 json_config -- 
json_config/common.sh@31 -- # local app=target 00:06:10.914 12:36:40 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:10.914 12:36:40 json_config -- json_config/common.sh@35 -- # [[ -n 3148877 ]] 00:06:10.914 12:36:40 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3148877 00:06:10.914 12:36:40 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:10.914 12:36:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.914 12:36:40 json_config -- json_config/common.sh@41 -- # kill -0 3148877 00:06:10.914 12:36:40 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:11.488 12:36:41 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:11.488 12:36:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.488 12:36:41 json_config -- json_config/common.sh@41 -- # kill -0 3148877 00:06:11.488 12:36:41 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:11.488 12:36:41 json_config -- json_config/common.sh@43 -- # break 00:06:11.488 12:36:41 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:11.488 12:36:41 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:11.488 SPDK target shutdown done 00:06:11.488 12:36:41 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:11.488 INFO: relaunching applications... 
00:06:11.488 12:36:41 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.488 12:36:41 json_config -- json_config/common.sh@9 -- # local app=target 00:06:11.488 12:36:41 json_config -- json_config/common.sh@10 -- # shift 00:06:11.488 12:36:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:11.488 12:36:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:11.488 12:36:41 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:11.488 12:36:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.488 12:36:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.488 12:36:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3149947 00:06:11.488 12:36:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:11.488 Waiting for target to run... 00:06:11.488 12:36:41 json_config -- json_config/common.sh@25 -- # waitforlisten 3149947 /var/tmp/spdk_tgt.sock 00:06:11.488 12:36:41 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.488 12:36:41 json_config -- common/autotest_common.sh@835 -- # '[' -z 3149947 ']' 00:06:11.488 12:36:41 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:11.488 12:36:41 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.488 12:36:41 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:11.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:11.488 12:36:41 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.488 12:36:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.488 [2024-11-28 12:36:41.501445] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:11.488 [2024-11-28 12:36:41.501504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3149947 ] 00:06:12.060 [2024-11-28 12:36:41.912568] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:12.060 [2024-11-28 12:36:41.968730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.060 [2024-11-28 12:36:41.979634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.632 [2024-11-28 12:36:42.451816] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:12.633 [2024-11-28 12:36:42.484117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:12.633 12:36:42 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.633 12:36:42 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:12.633 12:36:42 json_config -- json_config/common.sh@26 -- # echo '' 00:06:12.633 00:06:12.633 12:36:42 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:12.633 12:36:42 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:12.633 INFO: Checking if target configuration is the same... 
00:06:12.633 12:36:42 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:12.633 12:36:42 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:12.633 12:36:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:12.633 + '[' 2 -ne 2 ']' 00:06:12.633 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:12.633 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:12.633 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:12.633 +++ basename /dev/fd/62 00:06:12.633 ++ mktemp /tmp/62.XXX 00:06:12.633 + tmp_file_1=/tmp/62.Wvf 00:06:12.633 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:12.633 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:12.633 + tmp_file_2=/tmp/spdk_tgt_config.json.omp 00:06:12.633 + ret=0 00:06:12.633 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:12.893 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:12.893 + diff -u /tmp/62.Wvf /tmp/spdk_tgt_config.json.omp 00:06:12.893 + echo 'INFO: JSON config files are the same' 00:06:12.893 INFO: JSON config files are the same 00:06:12.893 + rm /tmp/62.Wvf /tmp/spdk_tgt_config.json.omp 00:06:12.893 + exit 0 00:06:12.893 12:36:42 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:12.893 12:36:42 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:12.893 INFO: changing configuration and checking if this can be detected... 
00:06:12.894 12:36:42 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:12.894 12:36:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:13.155 12:36:43 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:13.156 12:36:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:13.156 12:36:43 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:13.156 + '[' 2 -ne 2 ']' 00:06:13.156 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:13.156 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:13.156 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:13.156 +++ basename /dev/fd/62 00:06:13.156 ++ mktemp /tmp/62.XXX 00:06:13.156 + tmp_file_1=/tmp/62.F8V 00:06:13.156 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:13.156 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:13.156 + tmp_file_2=/tmp/spdk_tgt_config.json.KO1 00:06:13.156 + ret=0 00:06:13.156 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:13.417 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:13.417 + diff -u /tmp/62.F8V /tmp/spdk_tgt_config.json.KO1 00:06:13.417 + ret=1 00:06:13.418 + echo '=== Start of file: /tmp/62.F8V ===' 00:06:13.418 + cat /tmp/62.F8V 00:06:13.418 + echo '=== End of file: /tmp/62.F8V ===' 00:06:13.418 + echo '' 00:06:13.418 + echo '=== Start of file: /tmp/spdk_tgt_config.json.KO1 ===' 00:06:13.418 + cat /tmp/spdk_tgt_config.json.KO1 00:06:13.418 + echo '=== End of file: /tmp/spdk_tgt_config.json.KO1 ===' 00:06:13.418 + echo '' 00:06:13.418 + rm /tmp/62.F8V /tmp/spdk_tgt_config.json.KO1 00:06:13.418 + exit 1 00:06:13.418 12:36:43 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:13.418 INFO: configuration change detected. 
00:06:13.418 12:36:43 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:13.418 12:36:43 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:13.418 12:36:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:13.418 12:36:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.418 12:36:43 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:13.418 12:36:43 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:13.418 12:36:43 json_config -- json_config/json_config.sh@324 -- # [[ -n 3149947 ]] 00:06:13.418 12:36:43 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:13.418 12:36:43 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:13.418 12:36:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:13.418 12:36:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.418 12:36:43 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:13.418 12:36:43 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:13.418 12:36:43 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:13.418 12:36:43 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:13.418 12:36:43 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:13.418 12:36:43 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:13.418 12:36:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:13.418 12:36:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.418 12:36:43 json_config -- json_config/json_config.sh@330 -- # killprocess 3149947 00:06:13.418 12:36:43 json_config -- common/autotest_common.sh@954 -- # '[' -z 3149947 ']' 00:06:13.418 12:36:43 json_config -- common/autotest_common.sh@958 -- # kill -0 
3149947 00:06:13.418 12:36:43 json_config -- common/autotest_common.sh@959 -- # uname 00:06:13.678 12:36:43 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.678 12:36:43 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3149947 00:06:13.678 12:36:43 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.678 12:36:43 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.678 12:36:43 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3149947' 00:06:13.678 killing process with pid 3149947 00:06:13.678 12:36:43 json_config -- common/autotest_common.sh@973 -- # kill 3149947 00:06:13.678 12:36:43 json_config -- common/autotest_common.sh@978 -- # wait 3149947 00:06:13.939 12:36:43 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:13.939 12:36:43 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:13.939 12:36:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:13.939 12:36:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.939 12:36:43 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:13.939 12:36:43 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:13.939 INFO: Success 00:06:13.939 00:06:13.939 real 0m7.433s 00:06:13.939 user 0m8.632s 00:06:13.939 sys 0m2.094s 00:06:13.939 12:36:43 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.940 12:36:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.940 ************************************ 00:06:13.940 END TEST json_config 00:06:13.940 ************************************ 00:06:13.940 12:36:43 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:13.940 12:36:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.940 12:36:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.940 12:36:43 -- common/autotest_common.sh@10 -- # set +x 00:06:13.940 ************************************ 00:06:13.940 START TEST json_config_extra_key 00:06:13.940 ************************************ 00:06:13.940 12:36:43 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:13.940 12:36:44 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:13.940 12:36:44 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:13.940 12:36:44 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:14.202 12:36:44 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:14.202 12:36:44 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.202 12:36:44 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:14.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.202 --rc genhtml_branch_coverage=1 00:06:14.202 --rc genhtml_function_coverage=1 00:06:14.202 --rc genhtml_legend=1 00:06:14.202 --rc geninfo_all_blocks=1 
00:06:14.202 --rc geninfo_unexecuted_blocks=1 00:06:14.202 00:06:14.202 ' 00:06:14.202 12:36:44 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:14.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.202 --rc genhtml_branch_coverage=1 00:06:14.202 --rc genhtml_function_coverage=1 00:06:14.202 --rc genhtml_legend=1 00:06:14.202 --rc geninfo_all_blocks=1 00:06:14.202 --rc geninfo_unexecuted_blocks=1 00:06:14.202 00:06:14.202 ' 00:06:14.202 12:36:44 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:14.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.202 --rc genhtml_branch_coverage=1 00:06:14.202 --rc genhtml_function_coverage=1 00:06:14.202 --rc genhtml_legend=1 00:06:14.202 --rc geninfo_all_blocks=1 00:06:14.202 --rc geninfo_unexecuted_blocks=1 00:06:14.202 00:06:14.202 ' 00:06:14.202 12:36:44 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:14.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.202 --rc genhtml_branch_coverage=1 00:06:14.202 --rc genhtml_function_coverage=1 00:06:14.202 --rc genhtml_legend=1 00:06:14.202 --rc geninfo_all_blocks=1 00:06:14.202 --rc geninfo_unexecuted_blocks=1 00:06:14.202 00:06:14.202 ' 00:06:14.202 12:36:44 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:14.202 12:36:44 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:14.202 12:36:44 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:14.202 12:36:44 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:14.202 12:36:44 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:14.202 12:36:44 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:14.202 12:36:44 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:06:14.202 12:36:44 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:14.202 12:36:44 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:14.202 12:36:44 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:14.202 12:36:44 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:14.202 12:36:44 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:14.202 12:36:44 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:14.202 12:36:44 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:14.202 12:36:44 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:14.202 12:36:44 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:14.202 12:36:44 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:14.202 12:36:44 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:14.202 12:36:44 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:14.202 12:36:44 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:14.203 12:36:44 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:14.203 12:36:44 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:14.203 12:36:44 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:14.203 12:36:44 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:14.203 12:36:44 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:06:14.203 12:36:44 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:14.203 12:36:44 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:06:14.203 12:36:44 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:14.203 12:36:44 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:14.203 12:36:44 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:14.203 12:36:44 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:14.203 12:36:44 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:14.203 12:36:44 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:06:14.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:14.203 12:36:44 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:14.203 12:36:44 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:14.203 12:36:44 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:06:14.203 12:36:44 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:06:14.203 12:36:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:06:14.203 12:36:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:06:14.203 12:36:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:06:14.203 12:36:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:06:14.203 12:36:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:06:14.203 12:36:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:06:14.203 12:36:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:06:14.203 12:36:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:06:14.203 12:36:44 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:06:14.203 12:36:44 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:06:14.203 INFO: launching applications...
00:06:14.203 12:36:44 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:06:14.203 12:36:44 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:06:14.203 12:36:44 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:06:14.203 12:36:44 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:06:14.203 12:36:44 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:06:14.203 12:36:44 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:06:14.203 12:36:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:14.203 12:36:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:14.203 12:36:44 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3150729
00:06:14.203 12:36:44 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:06:14.203 Waiting for target to run...
00:06:14.203 12:36:44 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3150729 /var/tmp/spdk_tgt.sock
00:06:14.203 12:36:44 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3150729 ']'
00:06:14.203 12:36:44 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:06:14.203 12:36:44 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:06:14.203 12:36:44 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:14.203 12:36:44 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:06:14.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:06:14.203 12:36:44 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:14.203 12:36:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:06:14.203 [2024-11-28 12:36:44.242859] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization...
00:06:14.203 [2024-11-28 12:36:44.242923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3150729 ]
00:06:14.464 [2024-11-28 12:36:44.558574] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:14.725 [2024-11-28 12:36:44.614322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:14.725 [2024-11-28 12:36:44.625562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:14.986 12:36:45 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:14.986 12:36:45 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:06:14.986 12:36:45 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:06:14.986
00:06:14.986 12:36:45 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:06:14.986 INFO: shutting down applications...
00:06:14.986 12:36:45 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:06:14.986 12:36:45 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:06:14.986 12:36:45 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:06:14.986 12:36:45 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3150729 ]]
00:06:14.986 12:36:45 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3150729
00:06:14.986 12:36:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:06:14.986 12:36:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:14.986 12:36:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3150729
00:06:14.986 12:36:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:06:15.558 12:36:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:06:15.558 12:36:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:15.558 12:36:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3150729
00:06:15.558 12:36:45 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:06:15.558 12:36:45 json_config_extra_key -- json_config/common.sh@43 -- # break
00:06:15.558 12:36:45 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:06:15.558 12:36:45 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:06:15.558 SPDK target shutdown done
00:06:15.558 12:36:45 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:06:15.558 Success
00:06:15.558
00:06:15.558 real 0m1.555s
00:06:15.558 user 0m1.083s
00:06:15.558 sys 0m0.381s
00:06:15.558 12:36:45 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:15.558 12:36:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:06:15.558 ************************************
00:06:15.558 END TEST json_config_extra_key
00:06:15.558 ************************************
00:06:15.558 12:36:45 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:15.558 12:36:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:15.558 12:36:45 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:15.558 12:36:45 -- common/autotest_common.sh@10 -- # set +x
00:06:15.558 ************************************
00:06:15.558 START TEST alias_rpc
00:06:15.558 ************************************
00:06:15.558 12:36:45 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:15.819 * Looking for test storage...
00:06:15.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:06:15.819 12:36:45 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:15.819 12:36:45 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:06:15.819 12:36:45 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:15.819 12:36:45 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@345 -- # : 1
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:15.819 12:36:45 alias_rpc -- scripts/common.sh@368 -- # return 0
00:06:15.819 12:36:45 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:15.819 12:36:45 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:15.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:15.819 --rc genhtml_branch_coverage=1
00:06:15.819 --rc genhtml_function_coverage=1
00:06:15.819 --rc genhtml_legend=1
00:06:15.819 --rc geninfo_all_blocks=1
00:06:15.819 --rc geninfo_unexecuted_blocks=1
00:06:15.819
00:06:15.819 '
00:06:15.819 12:36:45 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:15.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:15.819 --rc genhtml_branch_coverage=1
00:06:15.819 --rc genhtml_function_coverage=1
00:06:15.819 --rc genhtml_legend=1
00:06:15.819 --rc geninfo_all_blocks=1
00:06:15.819 --rc geninfo_unexecuted_blocks=1
00:06:15.819
00:06:15.819 '
00:06:15.819 12:36:45 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:15.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:15.819 --rc genhtml_branch_coverage=1
00:06:15.819 --rc genhtml_function_coverage=1
00:06:15.819 --rc genhtml_legend=1
00:06:15.819 --rc geninfo_all_blocks=1
00:06:15.819 --rc geninfo_unexecuted_blocks=1
00:06:15.819
00:06:15.819 '
00:06:15.819 12:36:45 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:15.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:15.819 --rc genhtml_branch_coverage=1
00:06:15.819 --rc genhtml_function_coverage=1
00:06:15.819 --rc genhtml_legend=1
00:06:15.819 --rc geninfo_all_blocks=1
00:06:15.819 --rc geninfo_unexecuted_blocks=1
00:06:15.819
00:06:15.819 '
00:06:15.819 12:36:45 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:15.819 12:36:45 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3151122
00:06:15.819 12:36:45 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3151122
00:06:15.819 12:36:45 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3151122 ']'
00:06:15.819 12:36:45 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:15.820 12:36:45 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:15.820 12:36:45 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:15.820 12:36:45 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:15.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:15.820 12:36:45 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:15.820 12:36:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:15.820 [2024-11-28 12:36:45.880919] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization...
00:06:15.820 [2024-11-28 12:36:45.880991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3151122 ]
00:06:16.080 [2024-11-28 12:36:46.017366] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:16.080 [2024-11-28 12:36:46.070762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:16.080 [2024-11-28 12:36:46.093689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:16.650 12:36:46 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:16.650 12:36:46 alias_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:16.651 12:36:46 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i
00:06:16.913 12:36:46 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3151122
00:06:16.913 12:36:46 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3151122 ']'
00:06:16.913 12:36:46 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3151122
00:06:16.913 12:36:46 alias_rpc -- common/autotest_common.sh@959 -- # uname
00:06:16.913 12:36:46 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:16.913 12:36:46 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3151122
00:06:16.913 12:36:46 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:16.913 12:36:46 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:16.913 12:36:46 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3151122'
00:06:16.913 killing process with pid 3151122
00:06:16.913 12:36:46 alias_rpc -- common/autotest_common.sh@973 -- # kill 3151122
00:06:16.913 12:36:46 alias_rpc -- common/autotest_common.sh@978 -- # wait 3151122
00:06:17.204
00:06:17.204 real 0m1.477s
00:06:17.204 user 0m1.489s
00:06:17.204 sys 0m0.441s
00:06:17.204 12:36:47 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:17.204 12:36:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:17.204 ************************************
00:06:17.204 END TEST alias_rpc
00:06:17.204 ************************************
00:06:17.204 12:36:47 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:06:17.204 12:36:47 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:17.204 12:36:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:17.204 12:36:47 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:17.204 12:36:47 -- common/autotest_common.sh@10 -- # set +x
00:06:17.204 ************************************
00:06:17.204 START TEST spdkcli_tcp
00:06:17.204 ************************************
00:06:17.204 12:36:47 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:17.204 * Looking for test storage...
00:06:17.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:06:17.204 12:36:47 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:17.204 12:36:47 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version
00:06:17.204 12:36:47 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:17.466 12:36:47 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:17.466 12:36:47 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
00:06:17.466 12:36:47 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:17.466 12:36:47 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:17.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:17.466 --rc genhtml_branch_coverage=1
00:06:17.466 --rc genhtml_function_coverage=1
00:06:17.466 --rc genhtml_legend=1
00:06:17.466 --rc geninfo_all_blocks=1
00:06:17.466 --rc geninfo_unexecuted_blocks=1
00:06:17.466
00:06:17.466 '
00:06:17.466 12:36:47 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:17.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:17.466 --rc genhtml_branch_coverage=1
00:06:17.466 --rc genhtml_function_coverage=1
00:06:17.466 --rc genhtml_legend=1
00:06:17.466 --rc geninfo_all_blocks=1
00:06:17.466 --rc geninfo_unexecuted_blocks=1
00:06:17.466
00:06:17.466 '
00:06:17.466 12:36:47 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:17.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:17.466 --rc genhtml_branch_coverage=1
00:06:17.466 --rc genhtml_function_coverage=1
00:06:17.466 --rc genhtml_legend=1
00:06:17.466 --rc geninfo_all_blocks=1
00:06:17.466 --rc geninfo_unexecuted_blocks=1
00:06:17.466
00:06:17.466 '
00:06:17.466 12:36:47 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:17.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:17.466 --rc genhtml_branch_coverage=1
00:06:17.466 --rc genhtml_function_coverage=1
00:06:17.466 --rc genhtml_legend=1
00:06:17.466 --rc geninfo_all_blocks=1
00:06:17.466 --rc geninfo_unexecuted_blocks=1
00:06:17.466
00:06:17.466 '
00:06:17.466 12:36:47 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:06:17.466 12:36:47 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:06:17.466 12:36:47 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:06:17.466 12:36:47 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:06:17.466 12:36:47 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:06:17.466 12:36:47 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:06:17.466 12:36:47 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:06:17.466 12:36:47 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:17.466 12:36:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:17.466 12:36:47 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3151518
00:06:17.466 12:36:47 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3151518
00:06:17.466 12:36:47 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3151518 ']'
00:06:17.466 12:36:47 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:17.466 12:36:47 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:17.466 12:36:47 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:17.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:17.466 12:36:47 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:17.466 12:36:47 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:06:17.466 12:36:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:17.466 [2024-11-28 12:36:47.448977] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization...
00:06:17.466 [2024-11-28 12:36:47.449054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3151518 ]
00:06:17.466 [2024-11-28 12:36:47.586632] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:17.727 [2024-11-28 12:36:47.641702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:17.727 [2024-11-28 12:36:47.659740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:17.727 [2024-11-28 12:36:47.659740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:18.300 12:36:48 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:18.300 12:36:48 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0
00:06:18.300 12:36:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3151535
00:06:18.300 12:36:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:06:18.300 12:36:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:06:18.300 [
00:06:18.300 "bdev_malloc_delete",
00:06:18.300 "bdev_malloc_create",
00:06:18.300 "bdev_null_resize",
00:06:18.300 "bdev_null_delete",
00:06:18.300 "bdev_null_create",
00:06:18.300 "bdev_nvme_cuse_unregister",
00:06:18.300 "bdev_nvme_cuse_register",
00:06:18.300 "bdev_opal_new_user",
00:06:18.300 "bdev_opal_set_lock_state",
00:06:18.300 "bdev_opal_delete",
00:06:18.300 "bdev_opal_get_info",
00:06:18.300 "bdev_opal_create",
00:06:18.300 "bdev_nvme_opal_revert",
00:06:18.300 "bdev_nvme_opal_init",
00:06:18.300 "bdev_nvme_send_cmd",
00:06:18.300 "bdev_nvme_set_keys",
00:06:18.300 "bdev_nvme_get_path_iostat",
00:06:18.300 "bdev_nvme_get_mdns_discovery_info",
00:06:18.301 "bdev_nvme_stop_mdns_discovery",
00:06:18.301 "bdev_nvme_start_mdns_discovery",
00:06:18.301 "bdev_nvme_set_multipath_policy",
00:06:18.301 "bdev_nvme_set_preferred_path",
00:06:18.301 "bdev_nvme_get_io_paths",
00:06:18.301 "bdev_nvme_remove_error_injection",
00:06:18.301 "bdev_nvme_add_error_injection",
00:06:18.301 "bdev_nvme_get_discovery_info",
00:06:18.301 "bdev_nvme_stop_discovery",
00:06:18.301 "bdev_nvme_start_discovery",
00:06:18.301 "bdev_nvme_get_controller_health_info",
00:06:18.301 "bdev_nvme_disable_controller",
00:06:18.301 "bdev_nvme_enable_controller",
00:06:18.301 "bdev_nvme_reset_controller",
00:06:18.301 "bdev_nvme_get_transport_statistics",
00:06:18.301 "bdev_nvme_apply_firmware",
00:06:18.301 "bdev_nvme_detach_controller",
00:06:18.301 "bdev_nvme_get_controllers",
00:06:18.301 "bdev_nvme_attach_controller",
00:06:18.301 "bdev_nvme_set_hotplug",
00:06:18.301 "bdev_nvme_set_options",
00:06:18.301 "bdev_passthru_delete",
00:06:18.301 "bdev_passthru_create",
00:06:18.301 "bdev_lvol_set_parent_bdev",
00:06:18.301 "bdev_lvol_set_parent",
00:06:18.301 "bdev_lvol_check_shallow_copy",
00:06:18.301 "bdev_lvol_start_shallow_copy",
00:06:18.301 "bdev_lvol_grow_lvstore",
00:06:18.301 "bdev_lvol_get_lvols",
00:06:18.301 "bdev_lvol_get_lvstores",
00:06:18.301 "bdev_lvol_delete",
00:06:18.301 "bdev_lvol_set_read_only",
00:06:18.301 "bdev_lvol_resize",
00:06:18.301 "bdev_lvol_decouple_parent",
00:06:18.301 "bdev_lvol_inflate",
00:06:18.301 "bdev_lvol_rename",
00:06:18.301 "bdev_lvol_clone_bdev",
00:06:18.301 "bdev_lvol_clone",
00:06:18.301 "bdev_lvol_snapshot",
00:06:18.301 "bdev_lvol_create",
00:06:18.301 "bdev_lvol_delete_lvstore",
00:06:18.301 "bdev_lvol_rename_lvstore",
00:06:18.301 "bdev_lvol_create_lvstore",
00:06:18.301 "bdev_raid_set_options",
00:06:18.301 "bdev_raid_remove_base_bdev",
00:06:18.301 "bdev_raid_add_base_bdev",
00:06:18.301 "bdev_raid_delete",
00:06:18.301 "bdev_raid_create",
00:06:18.301 "bdev_raid_get_bdevs",
00:06:18.301 "bdev_error_inject_error",
00:06:18.301 "bdev_error_delete",
00:06:18.301 "bdev_error_create",
00:06:18.301 "bdev_split_delete",
00:06:18.301 "bdev_split_create",
00:06:18.301 "bdev_delay_delete",
00:06:18.301 "bdev_delay_create",
00:06:18.301 "bdev_delay_update_latency",
00:06:18.301 "bdev_zone_block_delete",
00:06:18.301 "bdev_zone_block_create",
00:06:18.301 "blobfs_create",
00:06:18.301 "blobfs_detect",
00:06:18.301 "blobfs_set_cache_size",
00:06:18.301 "bdev_aio_delete",
00:06:18.301 "bdev_aio_rescan",
00:06:18.301 "bdev_aio_create",
00:06:18.301 "bdev_ftl_set_property",
00:06:18.301 "bdev_ftl_get_properties",
00:06:18.301 "bdev_ftl_get_stats",
00:06:18.301 "bdev_ftl_unmap",
00:06:18.301 "bdev_ftl_unload",
00:06:18.301 "bdev_ftl_delete",
00:06:18.301 "bdev_ftl_load",
00:06:18.301 "bdev_ftl_create",
00:06:18.301 "bdev_virtio_attach_controller",
00:06:18.301 "bdev_virtio_scsi_get_devices",
00:06:18.301 "bdev_virtio_detach_controller",
00:06:18.301 "bdev_virtio_blk_set_hotplug",
00:06:18.301 "bdev_iscsi_delete",
00:06:18.301 "bdev_iscsi_create",
00:06:18.301 "bdev_iscsi_set_options",
00:06:18.301 "accel_error_inject_error",
00:06:18.301 "ioat_scan_accel_module",
00:06:18.301 "dsa_scan_accel_module",
00:06:18.301 "iaa_scan_accel_module",
00:06:18.301 "vfu_virtio_create_fs_endpoint",
00:06:18.301 "vfu_virtio_create_scsi_endpoint",
00:06:18.301 "vfu_virtio_scsi_remove_target",
00:06:18.301 "vfu_virtio_scsi_add_target",
00:06:18.301 "vfu_virtio_create_blk_endpoint",
00:06:18.301 "vfu_virtio_delete_endpoint",
00:06:18.301 "keyring_file_remove_key",
00:06:18.301 "keyring_file_add_key",
00:06:18.301 "keyring_linux_set_options",
00:06:18.301 "fsdev_aio_delete",
00:06:18.301 "fsdev_aio_create",
00:06:18.301 "iscsi_get_histogram",
00:06:18.301 "iscsi_enable_histogram",
00:06:18.301 "iscsi_set_options",
00:06:18.301 "iscsi_get_auth_groups",
00:06:18.301 "iscsi_auth_group_remove_secret",
00:06:18.301 "iscsi_auth_group_add_secret",
00:06:18.301 "iscsi_delete_auth_group",
00:06:18.301 "iscsi_create_auth_group",
00:06:18.301 "iscsi_set_discovery_auth",
00:06:18.301 "iscsi_get_options",
00:06:18.301 "iscsi_target_node_request_logout",
00:06:18.301 "iscsi_target_node_set_redirect",
00:06:18.301 "iscsi_target_node_set_auth",
00:06:18.301 "iscsi_target_node_add_lun",
00:06:18.301 "iscsi_get_stats",
00:06:18.301 "iscsi_get_connections",
00:06:18.301 "iscsi_portal_group_set_auth",
00:06:18.301 "iscsi_start_portal_group",
00:06:18.301 "iscsi_delete_portal_group",
00:06:18.301 "iscsi_create_portal_group",
00:06:18.301 "iscsi_get_portal_groups",
00:06:18.301 "iscsi_delete_target_node",
00:06:18.301 "iscsi_target_node_remove_pg_ig_maps",
00:06:18.301 "iscsi_target_node_add_pg_ig_maps",
00:06:18.301 "iscsi_create_target_node",
00:06:18.301 "iscsi_get_target_nodes",
00:06:18.301 "iscsi_delete_initiator_group",
00:06:18.301 "iscsi_initiator_group_remove_initiators",
00:06:18.301 "iscsi_initiator_group_add_initiators",
00:06:18.301 "iscsi_create_initiator_group",
00:06:18.301 "iscsi_get_initiator_groups",
00:06:18.301 "nvmf_set_crdt",
00:06:18.301 "nvmf_set_config",
00:06:18.301 "nvmf_set_max_subsystems",
00:06:18.301 "nvmf_stop_mdns_prr",
00:06:18.301 "nvmf_publish_mdns_prr",
00:06:18.301 "nvmf_subsystem_get_listeners",
00:06:18.301 "nvmf_subsystem_get_qpairs",
00:06:18.301 "nvmf_subsystem_get_controllers",
00:06:18.301 "nvmf_get_stats",
00:06:18.301 "nvmf_get_transports",
00:06:18.301 "nvmf_create_transport",
00:06:18.301 "nvmf_get_targets",
00:06:18.301 "nvmf_delete_target",
00:06:18.301 "nvmf_create_target",
00:06:18.301 "nvmf_subsystem_allow_any_host",
00:06:18.301 "nvmf_subsystem_set_keys",
00:06:18.301 "nvmf_subsystem_remove_host",
00:06:18.301 "nvmf_subsystem_add_host",
00:06:18.301 "nvmf_ns_remove_host",
00:06:18.301 "nvmf_ns_add_host",
00:06:18.301 "nvmf_subsystem_remove_ns",
00:06:18.301 "nvmf_subsystem_set_ns_ana_group",
00:06:18.301 "nvmf_subsystem_add_ns",
00:06:18.301 "nvmf_subsystem_listener_set_ana_state",
00:06:18.301 "nvmf_discovery_get_referrals",
00:06:18.301 "nvmf_discovery_remove_referral",
00:06:18.301 "nvmf_discovery_add_referral",
00:06:18.301 "nvmf_subsystem_remove_listener",
00:06:18.301 "nvmf_subsystem_add_listener",
00:06:18.301 "nvmf_delete_subsystem",
00:06:18.301 "nvmf_create_subsystem",
00:06:18.301 "nvmf_get_subsystems",
00:06:18.301 "env_dpdk_get_mem_stats",
00:06:18.301 "nbd_get_disks",
00:06:18.301 "nbd_stop_disk",
00:06:18.301 "nbd_start_disk",
00:06:18.301 "ublk_recover_disk",
00:06:18.301 "ublk_get_disks",
00:06:18.301 "ublk_stop_disk",
00:06:18.301 "ublk_start_disk",
00:06:18.301 "ublk_destroy_target",
00:06:18.301 "ublk_create_target",
00:06:18.301 "virtio_blk_create_transport",
00:06:18.301 "virtio_blk_get_transports",
00:06:18.301 "vhost_controller_set_coalescing",
00:06:18.301 "vhost_get_controllers",
00:06:18.301 "vhost_delete_controller",
00:06:18.301 "vhost_create_blk_controller",
00:06:18.301 "vhost_scsi_controller_remove_target",
00:06:18.301 "vhost_scsi_controller_add_target",
00:06:18.301 "vhost_start_scsi_controller",
00:06:18.301 "vhost_create_scsi_controller",
00:06:18.301 "thread_set_cpumask",
00:06:18.301 "scheduler_set_options",
00:06:18.301 "framework_get_governor",
00:06:18.301 "framework_get_scheduler",
00:06:18.301 "framework_set_scheduler",
00:06:18.301 "framework_get_reactors",
00:06:18.301 "thread_get_io_channels",
00:06:18.301 "thread_get_pollers",
00:06:18.301 "thread_get_stats",
00:06:18.301 "framework_monitor_context_switch",
00:06:18.301 "spdk_kill_instance",
00:06:18.301 "log_enable_timestamps",
00:06:18.301 "log_get_flags",
00:06:18.301 "log_clear_flag",
00:06:18.301 "log_set_flag",
00:06:18.301 "log_get_level",
00:06:18.301 "log_set_level",
00:06:18.301 "log_get_print_level",
00:06:18.301 "log_set_print_level",
00:06:18.301 "framework_enable_cpumask_locks",
00:06:18.301 "framework_disable_cpumask_locks",
00:06:18.301 "framework_wait_init",
00:06:18.301 "framework_start_init",
00:06:18.301 "scsi_get_devices",
00:06:18.301 "bdev_get_histogram",
00:06:18.301 "bdev_enable_histogram",
00:06:18.301 "bdev_set_qos_limit",
00:06:18.301 "bdev_set_qd_sampling_period",
00:06:18.301 "bdev_get_bdevs",
00:06:18.301 "bdev_reset_iostat",
00:06:18.301 "bdev_get_iostat",
00:06:18.301 "bdev_examine",
00:06:18.301 "bdev_wait_for_examine",
00:06:18.301 "bdev_set_options",
00:06:18.301 "accel_get_stats",
00:06:18.301 "accel_set_options",
00:06:18.301 "accel_set_driver",
00:06:18.301 "accel_crypto_key_destroy",
00:06:18.301 "accel_crypto_keys_get",
00:06:18.301 "accel_crypto_key_create",
00:06:18.301 "accel_assign_opc",
00:06:18.301 "accel_get_module_info",
00:06:18.301 "accel_get_opc_assignments",
00:06:18.301 "vmd_rescan",
00:06:18.301 "vmd_remove_device",
00:06:18.301 "vmd_enable",
00:06:18.301 "sock_get_default_impl",
00:06:18.301 "sock_set_default_impl",
00:06:18.301 "sock_impl_set_options",
00:06:18.301 "sock_impl_get_options",
00:06:18.301 "iobuf_get_stats",
00:06:18.301 "iobuf_set_options",
00:06:18.301 "keyring_get_keys",
00:06:18.301 "vfu_tgt_set_base_path",
00:06:18.301 "framework_get_pci_devices",
00:06:18.301 "framework_get_config",
00:06:18.301 "framework_get_subsystems",
00:06:18.301 "fsdev_set_opts",
00:06:18.301 "fsdev_get_opts",
00:06:18.301 "trace_get_info",
00:06:18.301 "trace_get_tpoint_group_mask",
00:06:18.301 "trace_disable_tpoint_group",
00:06:18.301 "trace_enable_tpoint_group",
00:06:18.301 "trace_clear_tpoint_mask",
00:06:18.302 "trace_set_tpoint_mask",
00:06:18.302 "notify_get_notifications",
00:06:18.302 "notify_get_types",
00:06:18.302 "spdk_get_version",
00:06:18.302 "rpc_get_methods"
00:06:18.302 ]
00:06:18.563 12:36:48 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:06:18.563 12:36:48 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:18.563 12:36:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:18.563 12:36:48 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:06:18.563 12:36:48 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3151518
00:06:18.563 12:36:48 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3151518 ']'
00:06:18.563 12:36:48 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3151518
00:06:18.563 12:36:48 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname
00:06:18.563 12:36:48 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:18.563 12:36:48 spdkcli_tcp --
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3151518 00:06:18.563 12:36:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.563 12:36:48 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.563 12:36:48 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3151518' 00:06:18.563 killing process with pid 3151518 00:06:18.563 12:36:48 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3151518 00:06:18.563 12:36:48 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3151518 00:06:18.825 00:06:18.825 real 0m1.554s 00:06:18.825 user 0m2.656s 00:06:18.825 sys 0m0.476s 00:06:18.825 12:36:48 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.825 12:36:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.825 ************************************ 00:06:18.825 END TEST spdkcli_tcp 00:06:18.825 ************************************ 00:06:18.825 12:36:48 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:18.825 12:36:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.825 12:36:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.825 12:36:48 -- common/autotest_common.sh@10 -- # set +x 00:06:18.825 ************************************ 00:06:18.825 START TEST dpdk_mem_utility 00:06:18.825 ************************************ 00:06:18.825 12:36:48 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:18.825 * Looking for test storage... 
00:06:18.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:18.825 12:36:48 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.825 12:36:48 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.825 12:36:48 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:19.086 12:36:48 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:19.086 12:36:48 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.086 12:36:48 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.086 12:36:48 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.086 12:36:48 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.086 12:36:48 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.087 12:36:48 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.087 12:36:48 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.087 12:36:48 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.087 12:36:48 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.087 12:36:48 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.087 12:36:48 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.087 12:36:48 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:19.087 12:36:48 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:19.087 12:36:48 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.087 12:36:48 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.087 12:36:48 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:19.087 12:36:48 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:19.087 12:36:48 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.087 12:36:48 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:19.087 12:36:48 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.087 12:36:49 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:19.087 12:36:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:19.087 12:36:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.087 12:36:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:19.087 12:36:49 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.087 12:36:49 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.087 12:36:49 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.087 12:36:49 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:19.087 12:36:49 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.087 12:36:49 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:19.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.087 --rc genhtml_branch_coverage=1 00:06:19.087 --rc genhtml_function_coverage=1 00:06:19.087 --rc genhtml_legend=1 00:06:19.087 --rc geninfo_all_blocks=1 00:06:19.087 --rc geninfo_unexecuted_blocks=1 00:06:19.087 00:06:19.087 ' 00:06:19.087 12:36:49 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:19.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.087 --rc genhtml_branch_coverage=1 00:06:19.087 --rc genhtml_function_coverage=1 00:06:19.087 --rc genhtml_legend=1 00:06:19.087 --rc geninfo_all_blocks=1 00:06:19.087 --rc 
geninfo_unexecuted_blocks=1 00:06:19.087 00:06:19.087 ' 00:06:19.087 12:36:49 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:19.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.087 --rc genhtml_branch_coverage=1 00:06:19.087 --rc genhtml_function_coverage=1 00:06:19.087 --rc genhtml_legend=1 00:06:19.087 --rc geninfo_all_blocks=1 00:06:19.087 --rc geninfo_unexecuted_blocks=1 00:06:19.087 00:06:19.087 ' 00:06:19.087 12:36:49 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:19.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.087 --rc genhtml_branch_coverage=1 00:06:19.087 --rc genhtml_function_coverage=1 00:06:19.087 --rc genhtml_legend=1 00:06:19.087 --rc geninfo_all_blocks=1 00:06:19.087 --rc geninfo_unexecuted_blocks=1 00:06:19.087 00:06:19.087 ' 00:06:19.087 12:36:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:19.087 12:36:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3151893 00:06:19.087 12:36:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3151893 00:06:19.087 12:36:49 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3151893 ']' 00:06:19.087 12:36:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.087 12:36:49 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.087 12:36:49 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.087 12:36:49 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:19.087 12:36:49 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.087 12:36:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:19.087 [2024-11-28 12:36:49.073887] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:19.087 [2024-11-28 12:36:49.073965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3151893 ] 00:06:19.087 [2024-11-28 12:36:49.210693] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:19.348 [2024-11-28 12:36:49.265438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.348 [2024-11-28 12:36:49.288431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.919 12:36:49 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.919 12:36:49 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:19.919 12:36:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:19.919 12:36:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:19.920 12:36:49 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.920 12:36:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:19.920 { 00:06:19.920 "filename": "/tmp/spdk_mem_dump.txt" 00:06:19.920 } 00:06:19.920 12:36:49 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.920 12:36:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:19.920 DPDK memory size 818.000000 MiB in 1 
heap(s) 00:06:19.920 1 heaps totaling size 818.000000 MiB 00:06:19.920 size: 818.000000 MiB heap id: 0 00:06:19.920 end heaps---------- 00:06:19.920 9 mempools totaling size 603.782043 MiB 00:06:19.920 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:19.920 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:19.920 size: 100.555481 MiB name: bdev_io_3151893 00:06:19.920 size: 50.003479 MiB name: msgpool_3151893 00:06:19.920 size: 36.509338 MiB name: fsdev_io_3151893 00:06:19.920 size: 21.763794 MiB name: PDU_Pool 00:06:19.920 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:19.920 size: 4.133484 MiB name: evtpool_3151893 00:06:19.920 size: 0.026123 MiB name: Session_Pool 00:06:19.920 end mempools------- 00:06:19.920 6 memzones totaling size 4.142822 MiB 00:06:19.920 size: 1.000366 MiB name: RG_ring_0_3151893 00:06:19.920 size: 1.000366 MiB name: RG_ring_1_3151893 00:06:19.920 size: 1.000366 MiB name: RG_ring_4_3151893 00:06:19.920 size: 1.000366 MiB name: RG_ring_5_3151893 00:06:19.920 size: 0.125366 MiB name: RG_ring_2_3151893 00:06:19.920 size: 0.015991 MiB name: RG_ring_3_3151893 00:06:19.920 end memzones------- 00:06:19.920 12:36:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:19.920 heap id: 0 total size: 818.000000 MiB number of busy elements: 43 number of free elements: 15 00:06:19.920 list of free elements. 
size: 10.993225 MiB 00:06:19.920 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:19.920 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:19.920 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:19.920 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:19.920 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:19.920 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:19.920 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:19.920 element at address: 0x200000200000 with size: 0.858093 MiB 00:06:19.920 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:06:19.920 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:19.920 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:19.920 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:19.920 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:19.920 element at address: 0x200028200000 with size: 0.410034 MiB 00:06:19.920 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:19.920 list of standard malloc elements. 
size: 199.077881 MiB 00:06:19.920 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:19.920 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:19.920 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:19.920 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:19.920 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:19.920 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:19.920 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:19.920 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:19.920 element at address: 0x2000002fbcc0 with size: 0.000183 MiB 00:06:19.920 element at address: 0x2000003fdec0 with size: 0.000183 MiB 00:06:19.920 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:19.920 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:19.920 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:19.920 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:19.920 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:19.920 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:19.920 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:19.920 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:19.920 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:19.920 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:19.920 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:19.920 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:19.920 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:19.920 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:19.920 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:19.920 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:19.920 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:19.920 element at 
address: 0x200003efb980 with size: 0.000183 MiB 00:06:19.920 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:19.920 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:19.920 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:19.920 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:19.920 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:19.920 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:19.920 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:19.920 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:19.920 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:19.920 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:19.920 element at address: 0x200028268f80 with size: 0.000183 MiB 00:06:19.920 element at address: 0x200028269040 with size: 0.000183 MiB 00:06:19.920 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:06:19.920 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:19.920 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:19.920 list of memzone associated elements. 
size: 607.928894 MiB 00:06:19.920 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:19.920 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:19.920 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:19.920 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:19.920 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:19.920 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3151893_0 00:06:19.920 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:19.920 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3151893_0 00:06:19.920 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:19.920 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3151893_0 00:06:19.920 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:19.920 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:19.920 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:19.920 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:19.920 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:19.920 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3151893_0 00:06:19.920 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:19.920 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3151893 00:06:19.920 element at address: 0x2000002fbd80 with size: 1.008118 MiB 00:06:19.920 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3151893 00:06:19.920 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:19.920 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:19.920 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:19.920 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:19.920 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:19.920 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:19.920 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:19.920 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:19.920 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:19.920 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3151893 00:06:19.920 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:19.920 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3151893 00:06:19.920 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:19.920 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3151893 00:06:19.920 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:06:19.920 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3151893 00:06:19.920 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:19.920 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3151893 00:06:19.920 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:19.920 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3151893 00:06:19.920 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:19.920 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:19.920 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:19.920 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:19.920 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:19.920 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:19.920 element at address: 0x2000002dbac0 with size: 0.125488 MiB 00:06:19.920 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3151893 00:06:19.920 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:19.920 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3151893 00:06:19.920 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:06:19.920 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:19.920 element at address: 0x200028269100 with size: 0.023743 MiB 00:06:19.920 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:19.920 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:19.920 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3151893 00:06:19.920 element at address: 0x20002826f240 with size: 0.002441 MiB 00:06:19.920 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:19.921 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:19.921 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3151893 00:06:19.921 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:19.921 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3151893 00:06:19.921 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:19.921 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3151893 00:06:19.921 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:06:19.921 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:19.921 12:36:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:19.921 12:36:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3151893 00:06:19.921 12:36:49 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3151893 ']' 00:06:19.921 12:36:49 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3151893 00:06:19.921 12:36:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:19.921 12:36:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.921 12:36:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3151893 00:06:19.921 12:36:50 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.921 12:36:50 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.921 12:36:50 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3151893' 00:06:19.921 killing process with pid 3151893 00:06:19.921 12:36:50 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3151893 00:06:19.921 12:36:50 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3151893 00:06:20.183 00:06:20.183 real 0m1.381s 00:06:20.183 user 0m1.330s 00:06:20.183 sys 0m0.418s 00:06:20.183 12:36:50 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.183 12:36:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:20.183 ************************************ 00:06:20.183 END TEST dpdk_mem_utility 00:06:20.183 ************************************ 00:06:20.183 12:36:50 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:20.183 12:36:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.183 12:36:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.183 12:36:50 -- common/autotest_common.sh@10 -- # set +x 00:06:20.183 ************************************ 00:06:20.183 START TEST event 00:06:20.183 ************************************ 00:06:20.183 12:36:50 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:20.444 * Looking for test storage... 
00:06:20.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:20.444 12:36:50 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:20.444 12:36:50 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:20.444 12:36:50 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:20.444 12:36:50 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:20.444 12:36:50 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.444 12:36:50 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.444 12:36:50 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.444 12:36:50 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.444 12:36:50 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.444 12:36:50 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.444 12:36:50 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.444 12:36:50 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.444 12:36:50 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.444 12:36:50 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.445 12:36:50 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.445 12:36:50 event -- scripts/common.sh@344 -- # case "$op" in 00:06:20.445 12:36:50 event -- scripts/common.sh@345 -- # : 1 00:06:20.445 12:36:50 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.445 12:36:50 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.445 12:36:50 event -- scripts/common.sh@365 -- # decimal 1 00:06:20.445 12:36:50 event -- scripts/common.sh@353 -- # local d=1 00:06:20.445 12:36:50 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.445 12:36:50 event -- scripts/common.sh@355 -- # echo 1 00:06:20.445 12:36:50 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.445 12:36:50 event -- scripts/common.sh@366 -- # decimal 2 00:06:20.445 12:36:50 event -- scripts/common.sh@353 -- # local d=2 00:06:20.445 12:36:50 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.445 12:36:50 event -- scripts/common.sh@355 -- # echo 2 00:06:20.445 12:36:50 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.445 12:36:50 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.445 12:36:50 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.445 12:36:50 event -- scripts/common.sh@368 -- # return 0 00:06:20.445 12:36:50 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.445 12:36:50 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:20.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.445 --rc genhtml_branch_coverage=1 00:06:20.445 --rc genhtml_function_coverage=1 00:06:20.445 --rc genhtml_legend=1 00:06:20.445 --rc geninfo_all_blocks=1 00:06:20.445 --rc geninfo_unexecuted_blocks=1 00:06:20.445 00:06:20.445 ' 00:06:20.445 12:36:50 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:20.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.445 --rc genhtml_branch_coverage=1 00:06:20.445 --rc genhtml_function_coverage=1 00:06:20.445 --rc genhtml_legend=1 00:06:20.445 --rc geninfo_all_blocks=1 00:06:20.445 --rc geninfo_unexecuted_blocks=1 00:06:20.445 00:06:20.445 ' 00:06:20.445 12:36:50 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:20.445 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:20.445 --rc genhtml_branch_coverage=1 00:06:20.445 --rc genhtml_function_coverage=1 00:06:20.445 --rc genhtml_legend=1 00:06:20.445 --rc geninfo_all_blocks=1 00:06:20.445 --rc geninfo_unexecuted_blocks=1 00:06:20.445 00:06:20.445 ' 00:06:20.445 12:36:50 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:20.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.445 --rc genhtml_branch_coverage=1 00:06:20.445 --rc genhtml_function_coverage=1 00:06:20.445 --rc genhtml_legend=1 00:06:20.445 --rc geninfo_all_blocks=1 00:06:20.445 --rc geninfo_unexecuted_blocks=1 00:06:20.445 00:06:20.445 ' 00:06:20.445 12:36:50 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:20.445 12:36:50 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:20.445 12:36:50 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:20.445 12:36:50 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:20.445 12:36:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.445 12:36:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.445 ************************************ 00:06:20.445 START TEST event_perf 00:06:20.445 ************************************ 00:06:20.445 12:36:50 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:20.445 Running I/O for 1 seconds...[2024-11-28 12:36:50.534509] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:06:20.445 [2024-11-28 12:36:50.534583] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3152186 ] 00:06:20.709 [2024-11-28 12:36:50.671499] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:20.709 [2024-11-28 12:36:50.724330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.709 [2024-11-28 12:36:50.744683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.709 [2024-11-28 12:36:50.744834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.709 [2024-11-28 12:36:50.744982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.709 Running I/O for 1 seconds...[2024-11-28 12:36:50.744984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.653 00:06:21.653 lcore 0: 175596 00:06:21.653 lcore 1: 175598 00:06:21.653 lcore 2: 175592 00:06:21.653 lcore 3: 175594 00:06:21.653 done. 
00:06:21.653 00:06:21.653 real 0m1.254s 00:06:21.653 user 0m4.062s 00:06:21.653 sys 0m0.082s 00:06:21.653 12:36:51 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.653 12:36:51 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.653 ************************************ 00:06:21.653 END TEST event_perf 00:06:21.653 ************************************ 00:06:21.914 12:36:51 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:21.914 12:36:51 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:21.914 12:36:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.914 12:36:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.914 ************************************ 00:06:21.914 START TEST event_reactor 00:06:21.914 ************************************ 00:06:21.914 12:36:51 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:21.914 [2024-11-28 12:36:51.868481] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:21.914 [2024-11-28 12:36:51.868549] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3152382 ] 00:06:21.914 [2024-11-28 12:36:52.002498] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:22.174 [2024-11-28 12:36:52.054018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.174 [2024-11-28 12:36:52.070560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.115 test_start 00:06:23.115 oneshot 00:06:23.115 tick 100 00:06:23.115 tick 100 00:06:23.115 tick 250 00:06:23.115 tick 100 00:06:23.115 tick 100 00:06:23.115 tick 250 00:06:23.115 tick 100 00:06:23.115 tick 500 00:06:23.115 tick 100 00:06:23.115 tick 100 00:06:23.115 tick 250 00:06:23.115 tick 100 00:06:23.115 tick 100 00:06:23.115 test_end 00:06:23.115 00:06:23.115 real 0m1.245s 00:06:23.115 user 0m1.062s 00:06:23.115 sys 0m0.079s 00:06:23.115 12:36:53 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.115 12:36:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:23.115 ************************************ 00:06:23.115 END TEST event_reactor 00:06:23.115 ************************************ 00:06:23.115 12:36:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:23.115 12:36:53 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:23.115 12:36:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.115 12:36:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.115 ************************************ 00:06:23.115 START TEST event_reactor_perf 00:06:23.115 ************************************ 00:06:23.115 12:36:53 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:23.115 [2024-11-28 12:36:53.192412] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:06:23.115 [2024-11-28 12:36:53.192502] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3152726 ] 00:06:23.375 [2024-11-28 12:36:53.326841] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:23.375 [2024-11-28 12:36:53.382888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.375 [2024-11-28 12:36:53.404953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.317 test_start 00:06:24.317 test_end 00:06:24.317 Performance: 537815 events per second 00:06:24.317 00:06:24.317 real 0m1.256s 00:06:24.317 user 0m1.073s 00:06:24.318 sys 0m0.078s 00:06:24.318 12:36:54 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.318 12:36:54 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.318 ************************************ 00:06:24.318 END TEST event_reactor_perf 00:06:24.318 ************************************ 00:06:24.579 12:36:54 event -- event/event.sh@49 -- # uname -s 00:06:24.579 12:36:54 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:24.579 12:36:54 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:24.579 12:36:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.579 12:36:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.579 12:36:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.579 ************************************ 00:06:24.579 START TEST event_scheduler 00:06:24.579 ************************************ 00:06:24.579 12:36:54 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:24.579 * Looking for test storage... 00:06:24.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:24.579 12:36:54 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.579 12:36:54 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.579 12:36:54 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.579 12:36:54 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.579 12:36:54 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.579 12:36:54 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.579 12:36:54 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.579 12:36:54 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.579 12:36:54 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.579 12:36:54 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.579 12:36:54 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.579 12:36:54 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.579 12:36:54 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.579 12:36:54 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.579 12:36:54 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.579 12:36:54 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:24.579 12:36:54 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:24.579 12:36:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.579 12:36:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.579 12:36:54 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:24.579 12:36:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:24.579 12:36:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.579 12:36:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:24.579 12:36:54 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.579 12:36:54 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:24.841 12:36:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:24.841 12:36:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.841 12:36:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:24.841 12:36:54 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.841 12:36:54 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.841 12:36:54 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.841 12:36:54 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:24.841 12:36:54 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.841 12:36:54 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.841 --rc genhtml_branch_coverage=1 00:06:24.841 --rc genhtml_function_coverage=1 00:06:24.841 --rc genhtml_legend=1 00:06:24.841 --rc geninfo_all_blocks=1 00:06:24.841 --rc geninfo_unexecuted_blocks=1 00:06:24.841 00:06:24.841 ' 00:06:24.841 12:36:54 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.841 --rc genhtml_branch_coverage=1 00:06:24.841 --rc genhtml_function_coverage=1 00:06:24.841 --rc 
genhtml_legend=1 00:06:24.841 --rc geninfo_all_blocks=1 00:06:24.841 --rc geninfo_unexecuted_blocks=1 00:06:24.841 00:06:24.841 ' 00:06:24.841 12:36:54 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.841 --rc genhtml_branch_coverage=1 00:06:24.841 --rc genhtml_function_coverage=1 00:06:24.841 --rc genhtml_legend=1 00:06:24.841 --rc geninfo_all_blocks=1 00:06:24.841 --rc geninfo_unexecuted_blocks=1 00:06:24.841 00:06:24.841 ' 00:06:24.841 12:36:54 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.841 --rc genhtml_branch_coverage=1 00:06:24.841 --rc genhtml_function_coverage=1 00:06:24.841 --rc genhtml_legend=1 00:06:24.841 --rc geninfo_all_blocks=1 00:06:24.841 --rc geninfo_unexecuted_blocks=1 00:06:24.841 00:06:24.841 ' 00:06:24.841 12:36:54 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:24.841 12:36:54 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3153115 00:06:24.841 12:36:54 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.841 12:36:54 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3153115 00:06:24.841 12:36:54 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:24.841 12:36:54 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3153115 ']' 00:06:24.841 12:36:54 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.841 12:36:54 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.841 12:36:54 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.841 12:36:54 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.841 12:36:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.841 [2024-11-28 12:36:54.763708] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:24.841 [2024-11-28 12:36:54.763788] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3153115 ] 00:06:24.841 [2024-11-28 12:36:54.901277] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:24.841 [2024-11-28 12:36:54.958357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:25.103 [2024-11-28 12:36:54.990722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.103 [2024-11-28 12:36:54.990886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.103 [2024-11-28 12:36:54.991023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:25.103 [2024-11-28 12:36:54.991026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.676 12:36:55 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.676 12:36:55 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:25.676 12:36:55 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:25.676 12:36:55 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.676 12:36:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 
00:06:25.676 [2024-11-28 12:36:55.583801] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:25.676 [2024-11-28 12:36:55.583819] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:25.676 [2024-11-28 12:36:55.583829] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:25.676 [2024-11-28 12:36:55.583839] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:25.676 [2024-11-28 12:36:55.583845] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:25.676 12:36:55 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.676 12:36:55 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:25.676 12:36:55 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.676 12:36:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:25.676 [2024-11-28 12:36:55.644810] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:25.676 12:36:55 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.676 12:36:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:25.676 12:36:55 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.676 12:36:55 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.676 12:36:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:25.676 ************************************ 00:06:25.676 START TEST scheduler_create_thread 00:06:25.676 ************************************ 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.676 2 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.676 3 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.676 4 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.676 5 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.676 6 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:25.676 7 00:06:25.676 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.677 12:36:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:25.677 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.677 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.677 8 00:06:25.677 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.677 12:36:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:25.677 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.677 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.677 9 00:06:25.677 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.677 12:36:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:25.677 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.677 12:36:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.250 10 00:06:26.250 12:36:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.250 12:36:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:26.250 12:36:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.250 12:36:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.636 12:36:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.636 12:36:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:27.636 12:36:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:27.636 12:36:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.636 12:36:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.577 12:36:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.577 12:36:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:28.577 12:36:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.577 12:36:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.148 12:36:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.148 12:36:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:29.148 12:36:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:29.148 12:36:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.148 12:36:59 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.089 12:36:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.089 00:06:30.089 real 0m4.215s 00:06:30.089 user 0m0.025s 00:06:30.089 sys 0m0.008s 00:06:30.089 12:36:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.089 12:36:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.089 ************************************ 00:06:30.089 END TEST scheduler_create_thread 00:06:30.089 ************************************ 00:06:30.089 12:36:59 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:30.089 12:36:59 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3153115 00:06:30.089 12:36:59 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3153115 ']' 00:06:30.089 12:36:59 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 3153115 00:06:30.089 12:36:59 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:30.089 12:36:59 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.089 12:36:59 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3153115 00:06:30.089 12:37:00 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:30.089 12:37:00 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:30.089 12:37:00 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3153115' 00:06:30.089 killing process with pid 3153115 00:06:30.089 12:37:00 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3153115 00:06:30.089 12:37:00 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3153115 00:06:30.089 [2024-11-28 
12:37:00.179872] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:30.351 00:06:30.351 real 0m5.824s 00:06:30.351 user 0m12.555s 00:06:30.351 sys 0m0.411s 00:06:30.351 12:37:00 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.351 12:37:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.351 ************************************ 00:06:30.351 END TEST event_scheduler 00:06:30.351 ************************************ 00:06:30.351 12:37:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:30.351 12:37:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:30.351 12:37:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.351 12:37:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.351 12:37:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.351 ************************************ 00:06:30.351 START TEST app_repeat 00:06:30.351 ************************************ 00:06:30.351 12:37:00 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:30.351 12:37:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.351 12:37:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.351 12:37:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:30.351 12:37:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.351 12:37:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:30.351 12:37:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:30.351 12:37:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:30.351 12:37:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3154231 00:06:30.351 12:37:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:30.351 12:37:00 
event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:30.351 12:37:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3154231' 00:06:30.351 Process app_repeat pid: 3154231 00:06:30.351 12:37:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:30.351 12:37:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:30.351 spdk_app_start Round 0 00:06:30.351 12:37:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3154231 /var/tmp/spdk-nbd.sock 00:06:30.351 12:37:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3154231 ']' 00:06:30.351 12:37:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:30.351 12:37:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.351 12:37:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:30.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:30.351 12:37:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.351 12:37:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:30.351 [2024-11-28 12:37:00.462457] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:30.351 [2024-11-28 12:37:00.462582] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3154231 ] 00:06:30.614 [2024-11-28 12:37:00.605631] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation.
00:06:30.614 [2024-11-28 12:37:00.659191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:30.614 [2024-11-28 12:37:00.683433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:30.614 [2024-11-28 12:37:00.683526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:31.185 12:37:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:31.185 12:37:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:31.185 12:37:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:31.446 Malloc0
00:06:31.446 12:37:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:31.706 Malloc1
00:06:31.706 12:37:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:31.706 12:37:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:31.706 12:37:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:31.706 12:37:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:31.706 12:37:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:31.706 12:37:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:31.706 12:37:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:31.706 12:37:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:31.706 12:37:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:31.706 12:37:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:31.706 12:37:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:31.706 12:37:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:31.706 12:37:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:31.706 12:37:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:31.706 12:37:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:31.706 12:37:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:31.967 /dev/nbd0
00:06:31.967 12:37:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:31.967 12:37:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:31.967 12:37:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:31.967 12:37:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:31.967 12:37:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:31.967 12:37:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:31.967 12:37:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:31.967 12:37:01 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:31.967 12:37:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:31.967 12:37:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:31.967 12:37:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:31.967 1+0 records in
00:06:31.967 1+0 records out
00:06:31.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274082 s, 14.9 MB/s
00:06:31.967 12:37:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:31.967 12:37:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:31.967 12:37:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:31.967 12:37:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:31.967 12:37:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:31.967 12:37:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:31.967 12:37:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:31.967 12:37:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:31.967 /dev/nbd1
00:06:32.229 12:37:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:32.229 12:37:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:32.229 12:37:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:32.229 12:37:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:32.229 12:37:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:32.229 12:37:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:32.229 12:37:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:32.229 12:37:02 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:32.229 12:37:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:32.229 12:37:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:32.229 12:37:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:32.229 1+0 records in
00:06:32.229 1+0 records out
00:06:32.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027956 s, 14.7 MB/s
00:06:32.229 12:37:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:32.229 12:37:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:32.229 12:37:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:32.229 12:37:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:32.229 12:37:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:32.229 12:37:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:32.229 12:37:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:32.229 12:37:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:32.229 12:37:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:32.229 12:37:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:32.229 12:37:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:32.229 {
00:06:32.229 "nbd_device": "/dev/nbd0",
00:06:32.229 "bdev_name": "Malloc0"
00:06:32.229 },
00:06:32.229 {
00:06:32.229 "nbd_device": "/dev/nbd1",
00:06:32.229 "bdev_name": "Malloc1"
00:06:32.229 }
00:06:32.229 ]'
00:06:32.229 12:37:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:32.229 {
00:06:32.229 "nbd_device": "/dev/nbd0",
00:06:32.229 "bdev_name": "Malloc0"
00:06:32.229 },
00:06:32.229 {
00:06:32.229 "nbd_device": "/dev/nbd1",
00:06:32.229 "bdev_name": "Malloc1"
00:06:32.229 }
00:06:32.229 ]'
00:06:32.229 12:37:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:32.490 /dev/nbd1'
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:32.490 /dev/nbd1'
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:32.490 256+0 records in
00:06:32.490 256+0 records out
00:06:32.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121387 s, 86.4 MB/s
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:32.490 256+0 records in
00:06:32.490 256+0 records out
00:06:32.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118426 s, 88.5 MB/s
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:32.490 256+0 records in
00:06:32.490 256+0 records out
00:06:32.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131815 s, 79.5 MB/s
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:32.490 12:37:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:32.752 12:37:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:32.752 12:37:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:32.752 12:37:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:32.752 12:37:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:32.752 12:37:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:32.752 12:37:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:32.752 12:37:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:32.752 12:37:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:32.752 12:37:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:32.752 12:37:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:32.752 12:37:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:32.752 12:37:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:32.752 12:37:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:32.752 12:37:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:32.752 12:37:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:32.752 12:37:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:32.752 12:37:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:32.752 12:37:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:32.752 12:37:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:32.752 12:37:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:32.752 12:37:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:33.013 12:37:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:33.013 12:37:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:33.013 12:37:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:33.013 12:37:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:33.013 12:37:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:33.013 12:37:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:33.013 12:37:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:33.013 12:37:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:33.013 12:37:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:33.013 12:37:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:33.013 12:37:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:33.013 12:37:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:33.013 12:37:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:33.274 12:37:03 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:33.274 [2024-11-28 12:37:03.352023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:33.274 [2024-11-28 12:37:03.367634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:33.274 [2024-11-28 12:37:03.367635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:33.274 [2024-11-28 12:37:03.396804] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:33.274 [2024-11-28 12:37:03.396834] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:36.574 12:37:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:36.575 12:37:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:06:36.575 spdk_app_start Round 1
00:06:36.575 12:37:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3154231 /var/tmp/spdk-nbd.sock
00:06:36.575 12:37:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3154231 ']'
00:06:36.575 12:37:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:36.575 12:37:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:36.575 12:37:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:36.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:36.575 12:37:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:36.575 12:37:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:36.575 12:37:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:36.575 12:37:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:36.575 12:37:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:36.575 Malloc0
00:06:36.575 12:37:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:36.835 Malloc1
00:06:36.835 12:37:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:36.835 12:37:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:36.835 12:37:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:36.835 12:37:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:36.835 12:37:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:36.835 12:37:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:36.835 12:37:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:36.835 12:37:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:36.835 12:37:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:36.835 12:37:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:36.836 12:37:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:36.836 12:37:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:36.836 12:37:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:36.836 12:37:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:36.836 12:37:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:36.836 12:37:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:37.097 /dev/nbd0
00:06:37.097 12:37:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:37.097 12:37:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:37.097 12:37:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:37.097 12:37:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:37.097 12:37:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:37.097 12:37:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:37.097 12:37:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:37.097 12:37:07 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:37.097 12:37:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:37.097 12:37:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:37.097 12:37:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:37.097 1+0 records in
00:06:37.097 1+0 records out
00:06:37.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188104 s, 21.8 MB/s
00:06:37.097 12:37:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:37.097 12:37:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:37.097 12:37:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:37.097 12:37:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:37.097 12:37:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:37.097 12:37:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:37.097 12:37:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:37.097 12:37:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:37.360 /dev/nbd1
00:06:37.360 12:37:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:37.360 12:37:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:37.360 12:37:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:37.360 12:37:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:37.360 12:37:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:37.360 12:37:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:37.360 12:37:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:37.360 12:37:07 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:37.360 12:37:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:37.360 12:37:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:37.360 12:37:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:37.360 1+0 records in
00:06:37.360 1+0 records out
00:06:37.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275295 s, 14.9 MB/s
00:06:37.360 12:37:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:37.360 12:37:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:37.360 12:37:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:37.360 12:37:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:37.360 12:37:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:37.360 12:37:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:37.360 12:37:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:37.360 12:37:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:37.360 12:37:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:37.360 12:37:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:37.621 12:37:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:37.621 {
00:06:37.621 "nbd_device": "/dev/nbd0",
00:06:37.621 "bdev_name": "Malloc0"
00:06:37.621 },
00:06:37.621 {
00:06:37.621 "nbd_device": "/dev/nbd1",
00:06:37.621 "bdev_name": "Malloc1"
00:06:37.621 }
00:06:37.621 ]'
00:06:37.621 12:37:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:37.621 {
00:06:37.621 "nbd_device": "/dev/nbd0",
00:06:37.621 "bdev_name": "Malloc0"
00:06:37.621 },
00:06:37.621 {
00:06:37.621 "nbd_device": "/dev/nbd1",
00:06:37.621 "bdev_name": "Malloc1"
00:06:37.621 }
00:06:37.621 ]'
00:06:37.621 12:37:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:37.621 12:37:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:37.621 /dev/nbd1'
00:06:37.621 12:37:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:37.621 /dev/nbd1'
00:06:37.621 12:37:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:37.621 12:37:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:37.621 12:37:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:37.621 12:37:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:37.621 12:37:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:37.621 12:37:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:37.621 12:37:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:37.621 12:37:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:37.621 12:37:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:37.621 12:37:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:37.621 12:37:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:37.621 12:37:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:37.621 256+0 records in
00:06:37.621 256+0 records out
00:06:37.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124545 s, 84.2 MB/s
00:06:37.621 12:37:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:37.621 12:37:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:37.621 256+0 records in
00:06:37.621 256+0 records out
00:06:37.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123025 s, 85.2 MB/s
00:06:37.622 12:37:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:37.622 12:37:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:37.622 256+0 records in
00:06:37.622 256+0 records out
00:06:37.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130719 s, 80.2 MB/s
00:06:37.622 12:37:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:37.622 12:37:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:37.622 12:37:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:37.622 12:37:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:37.622 12:37:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:37.622 12:37:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:37.622 12:37:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:37.622 12:37:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:37.622 12:37:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:37.622 12:37:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:37.622 12:37:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:37.622 12:37:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:37.622 12:37:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:37.622 12:37:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:37.622 12:37:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:37.622 12:37:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:37.622 12:37:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:37.622 12:37:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:37.622 12:37:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:37.887 12:37:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:37.887 12:37:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:37.887 12:37:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:37.887 12:37:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:37.887 12:37:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:37.887 12:37:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:37.887 12:37:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:37.887 12:37:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:37.887 12:37:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:37.887 12:37:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:37.887 12:37:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:37.887 12:37:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:37.887 12:37:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:37.887 12:37:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:37.887 12:37:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:37.887 12:37:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:37.887 12:37:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:37.887 12:37:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:38.149 12:37:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:38.149 12:37:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:38.149 12:37:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:38.149 12:37:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:38.149 12:37:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:38.149 12:37:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:38.149 12:37:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:38.149 12:37:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:38.149 12:37:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:38.149 12:37:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:38.150 12:37:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:38.150 12:37:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:38.150 12:37:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:38.150 12:37:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:38.150 12:37:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:38.150 12:37:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:38.410 12:37:08 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:38.410 [2024-11-28 12:37:08.516419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:38.410 [2024-11-28 12:37:08.532020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:38.410 [2024-11-28 12:37:08.532022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:38.671 [2024-11-28 12:37:08.561923] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:38.671 [2024-11-28 12:37:08.561963] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:41.973 12:37:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:41.973 12:37:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:06:41.973 spdk_app_start Round 2
00:06:41.973 12:37:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3154231 /var/tmp/spdk-nbd.sock
00:06:41.973 12:37:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3154231 ']'
00:06:41.973 12:37:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:41.973 12:37:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:41.973 12:37:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:41.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:41.973 12:37:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:41.973 12:37:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:41.973 12:37:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:41.973 12:37:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:41.973 12:37:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:41.973 Malloc0
00:06:41.973 12:37:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:41.973 Malloc1
00:06:41.973 12:37:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:41.973 12:37:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:41.973 12:37:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:41.973 12:37:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:41.973 12:37:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:41.973 12:37:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:41.973 12:37:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:41.973 12:37:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:41.973 12:37:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:41.973 12:37:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:41.973 12:37:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:41.973 12:37:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:41.973 12:37:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:41.973 12:37:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:41.973 12:37:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:41.973 12:37:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:42.233 /dev/nbd0
00:06:42.233 12:37:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:42.233 12:37:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:42.233 12:37:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:42.233 12:37:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:42.233 12:37:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:42.233 12:37:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:42.233 12:37:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:42.233 12:37:12 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:42.233 12:37:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:42.233 12:37:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:42.233 12:37:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:42.233 1+0 records in
00:06:42.233 1+0 records out
00:06:42.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205978 s, 19.9 MB/s
00:06:42.233 12:37:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:42.233 12:37:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:42.233 12:37:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:42.233 12:37:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:42.233 12:37:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:42.233 12:37:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:42.233 12:37:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:42.233 12:37:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:42.494 /dev/nbd1
00:06:42.494 12:37:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:42.494 12:37:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:42.494 12:37:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:42.494 12:37:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:42.494 12:37:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:42.494 12:37:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:42.494 12:37:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:42.494 12:37:12 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:42.494 12:37:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:42.494 12:37:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:42.494 12:37:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:42.494 1+0 records in
00:06:42.494 1+0 records out
00:06:42.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281725 s, 14.5 MB/s
00:06:42.494 12:37:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:42.494 12:37:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:42.494 12:37:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:42.494 12:37:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.494 12:37:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:42.494 12:37:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:42.494 12:37:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:42.494 12:37:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.494 12:37:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.494 12:37:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:42.755 { 00:06:42.755 "nbd_device": "/dev/nbd0", 00:06:42.755 "bdev_name": "Malloc0" 00:06:42.755 }, 00:06:42.755 { 00:06:42.755 "nbd_device": "/dev/nbd1", 00:06:42.755 "bdev_name": "Malloc1" 00:06:42.755 } 00:06:42.755 ]' 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:42.755 { 00:06:42.755 "nbd_device": "/dev/nbd0", 00:06:42.755 "bdev_name": "Malloc0" 00:06:42.755 }, 00:06:42.755 { 00:06:42.755 "nbd_device": "/dev/nbd1", 00:06:42.755 "bdev_name": "Malloc1" 00:06:42.755 } 00:06:42.755 ]' 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:42.755 /dev/nbd1' 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:42.755 /dev/nbd1' 00:06:42.755 
12:37:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:42.755 256+0 records in 00:06:42.755 256+0 records out 00:06:42.755 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124732 s, 84.1 MB/s 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:42.755 256+0 records in 00:06:42.755 256+0 records out 00:06:42.755 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012346 s, 84.9 MB/s 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:42.755 256+0 records in 00:06:42.755 256+0 records out 00:06:42.755 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012897 s, 81.3 MB/s 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.755 12:37:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:43.016 12:37:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:43.016 12:37:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:43.016 12:37:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:43.016 12:37:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.016 12:37:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.016 12:37:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:43.016 12:37:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:43.016 12:37:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.016 12:37:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.016 12:37:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:43.277 12:37:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:43.277 12:37:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:43.277 12:37:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:43.277 12:37:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.277 12:37:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.277 12:37:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:43.277 12:37:13 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:43.277 12:37:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.277 12:37:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:43.277 12:37:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.277 12:37:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.277 12:37:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:43.277 12:37:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:43.277 12:37:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:43.537 12:37:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:43.537 12:37:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.537 12:37:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:43.537 12:37:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:43.537 12:37:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:43.537 12:37:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:43.537 12:37:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:43.537 12:37:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:43.537 12:37:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:43.538 12:37:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:43.538 12:37:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:43.798 [2024-11-28 12:37:13.694353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.798 [2024-11-28 12:37:13.710006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.798 [2024-11-28 12:37:13.710008] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.798 [2024-11-28 12:37:13.739325] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:43.798 [2024-11-28 12:37:13.739354] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:47.101 12:37:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3154231 /var/tmp/spdk-nbd.sock 00:06:47.101 12:37:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3154231 ']' 00:06:47.101 12:37:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:47.101 12:37:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.101 12:37:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:47.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:47.101 12:37:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.101 12:37:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:47.101 12:37:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.101 12:37:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:47.101 12:37:16 event.app_repeat -- event/event.sh@39 -- # killprocess 3154231 00:06:47.101 12:37:16 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3154231 ']' 00:06:47.101 12:37:16 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3154231 00:06:47.101 12:37:16 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:47.101 12:37:16 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.101 12:37:16 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3154231 00:06:47.101 12:37:16 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.101 12:37:16 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.101 12:37:16 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3154231' 00:06:47.101 killing process with pid 3154231 00:06:47.101 12:37:16 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3154231 00:06:47.101 12:37:16 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3154231 00:06:47.101 spdk_app_start is called in Round 0. 00:06:47.101 Shutdown signal received, stop current app iteration 00:06:47.101 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 reinitialization... 00:06:47.101 spdk_app_start is called in Round 1. 00:06:47.101 Shutdown signal received, stop current app iteration 00:06:47.101 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 reinitialization... 00:06:47.101 spdk_app_start is called in Round 2. 
00:06:47.101 Shutdown signal received, stop current app iteration 00:06:47.101 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 reinitialization... 00:06:47.101 spdk_app_start is called in Round 3. 00:06:47.101 Shutdown signal received, stop current app iteration 00:06:47.101 12:37:16 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:47.101 12:37:16 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:47.101 00:06:47.101 real 0m16.541s 00:06:47.101 user 0m36.173s 00:06:47.101 sys 0m2.395s 00:06:47.101 12:37:16 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.101 12:37:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:47.101 ************************************ 00:06:47.101 END TEST app_repeat 00:06:47.101 ************************************ 00:06:47.101 12:37:16 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:47.101 12:37:16 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:47.101 12:37:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.101 12:37:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.101 12:37:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.101 ************************************ 00:06:47.101 START TEST cpu_locks 00:06:47.101 ************************************ 00:06:47.101 12:37:17 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:47.102 * Looking for test storage... 
00:06:47.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:47.102 12:37:17 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.102 12:37:17 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.102 12:37:17 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:47.102 12:37:17 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:47.102 12:37:17 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.102 12:37:17 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.102 12:37:17 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.102 12:37:17 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.102 12:37:17 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.102 12:37:17 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.102 12:37:17 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.102 12:37:17 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.102 12:37:17 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.102 12:37:17 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.102 12:37:17 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.102 12:37:17 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:47.102 12:37:17 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:47.102 12:37:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.102 12:37:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.102 12:37:17 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:47.102 12:37:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:47.102 12:37:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.102 12:37:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:47.102 12:37:17 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.102 12:37:17 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:47.363 12:37:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:47.363 12:37:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.363 12:37:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:47.363 12:37:17 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.363 12:37:17 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.363 12:37:17 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.363 12:37:17 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:47.363 12:37:17 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.363 12:37:17 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:47.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.363 --rc genhtml_branch_coverage=1 00:06:47.363 --rc genhtml_function_coverage=1 00:06:47.363 --rc genhtml_legend=1 00:06:47.363 --rc geninfo_all_blocks=1 00:06:47.363 --rc geninfo_unexecuted_blocks=1 00:06:47.363 00:06:47.363 ' 00:06:47.363 12:37:17 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:47.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.363 --rc genhtml_branch_coverage=1 00:06:47.363 --rc genhtml_function_coverage=1 00:06:47.363 --rc genhtml_legend=1 00:06:47.363 --rc geninfo_all_blocks=1 00:06:47.363 --rc geninfo_unexecuted_blocks=1 
00:06:47.363 00:06:47.363 ' 00:06:47.363 12:37:17 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:47.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.363 --rc genhtml_branch_coverage=1 00:06:47.363 --rc genhtml_function_coverage=1 00:06:47.363 --rc genhtml_legend=1 00:06:47.363 --rc geninfo_all_blocks=1 00:06:47.363 --rc geninfo_unexecuted_blocks=1 00:06:47.363 00:06:47.363 ' 00:06:47.363 12:37:17 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:47.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.363 --rc genhtml_branch_coverage=1 00:06:47.363 --rc genhtml_function_coverage=1 00:06:47.363 --rc genhtml_legend=1 00:06:47.363 --rc geninfo_all_blocks=1 00:06:47.363 --rc geninfo_unexecuted_blocks=1 00:06:47.363 00:06:47.363 ' 00:06:47.363 12:37:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:47.363 12:37:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:47.363 12:37:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:47.363 12:37:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:47.363 12:37:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.363 12:37:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.363 12:37:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.363 ************************************ 00:06:47.363 START TEST default_locks 00:06:47.363 ************************************ 00:06:47.363 12:37:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:47.363 12:37:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3157796 00:06:47.363 12:37:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3157796 00:06:47.363 12:37:17 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.363 12:37:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3157796 ']' 00:06:47.363 12:37:17 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.363 12:37:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.363 12:37:17 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.363 12:37:17 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.363 12:37:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.363 [2024-11-28 12:37:17.339671] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:47.363 [2024-11-28 12:37:17.339740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3157796 ] 00:06:47.363 [2024-11-28 12:37:17.476539] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:47.624 [2024-11-28 12:37:17.529838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.624 [2024-11-28 12:37:17.546919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.194 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.194 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:48.194 12:37:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3157796 00:06:48.194 12:37:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3157796 00:06:48.194 12:37:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.765 lslocks: write error 00:06:48.765 12:37:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3157796 00:06:48.765 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3157796 ']' 00:06:48.765 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3157796 00:06:48.765 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:48.765 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.765 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3157796 00:06:48.765 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.765 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.766 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3157796' 00:06:48.766 killing process with pid 3157796 00:06:48.766 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3157796 00:06:48.766 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # 
wait 3157796 00:06:49.026 12:37:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3157796 00:06:49.026 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:49.026 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3157796 00:06:49.026 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:49.027 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.027 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:49.027 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.027 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3157796 00:06:49.027 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3157796 ']' 00:06:49.027 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.027 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.027 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:49.027 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.027 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3157796) - No such process 00:06:49.027 ERROR: process (pid: 3157796) is no longer running 00:06:49.027 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.027 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:49.027 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:49.027 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:49.027 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:49.027 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:49.027 12:37:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:49.027 12:37:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:49.027 12:37:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:49.027 12:37:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:49.027 00:06:49.027 real 0m1.654s 00:06:49.027 user 0m1.689s 00:06:49.027 sys 0m0.583s 00:06:49.027 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.027 12:37:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.027 ************************************ 00:06:49.027 END TEST default_locks 00:06:49.027 ************************************ 00:06:49.027 12:37:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:49.027 12:37:18 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.027 12:37:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.027 12:37:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.027 ************************************ 00:06:49.027 START TEST default_locks_via_rpc 00:06:49.027 ************************************ 00:06:49.027 12:37:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:49.027 12:37:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3158170 00:06:49.027 12:37:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3158170 00:06:49.027 12:37:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.027 12:37:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3158170 ']' 00:06:49.027 12:37:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.027 12:37:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.027 12:37:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.027 12:37:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.027 12:37:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.027 [2024-11-28 12:37:19.071283] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:06:49.027 [2024-11-28 12:37:19.071345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3158170 ]
00:06:49.287 [2024-11-28 12:37:19.207731] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:49.287 [2024-11-28 12:37:19.260212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:49.287 [2024-11-28 12:37:19.280297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:49.859 12:37:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:49.859 12:37:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:49.860 12:37:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:49.860 12:37:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:49.860 12:37:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:49.860 12:37:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:49.860 12:37:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:49.860 12:37:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:49.860 12:37:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:49.860 12:37:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:49.860 12:37:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:49.860 12:37:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:49.860 12:37:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:49.860 12:37:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:49.860 12:37:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3158170
00:06:49.860 12:37:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:49.860 12:37:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3158170
00:06:50.432 12:37:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3158170
00:06:50.432 12:37:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3158170 ']'
00:06:50.432 12:37:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3158170
00:06:50.432 12:37:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:06:50.432 12:37:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:50.432 12:37:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3158170
00:06:50.432 12:37:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:50.432 12:37:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:50.432 12:37:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3158170'
00:06:50.432 killing process with pid 3158170
00:06:50.432 12:37:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3158170
00:06:50.432 12:37:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3158170
00:06:50.693
00:06:50.693 real 0m1.570s
00:06:50.693 user 0m1.575s
00:06:50.693 sys 0m0.572s
00:06:50.693 12:37:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:50.693 12:37:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:50.693 ************************************
00:06:50.693 END TEST default_locks_via_rpc ************************************
00:06:50.693 12:37:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:50.693 12:37:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:50.693 12:37:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:50.693 12:37:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:50.693 ************************************
00:06:50.693 START TEST non_locking_app_on_locked_coremask
00:06:50.693 ************************************
00:06:50.693 12:37:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:06:50.693 12:37:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3158524
00:06:50.693 12:37:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3158524 /var/tmp/spdk.sock
00:06:50.693 12:37:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:50.693 12:37:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3158524 ']'
00:06:50.693 12:37:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:50.693 12:37:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:50.693 12:37:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:50.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:50.693 12:37:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:50.693 12:37:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:50.693 [2024-11-28 12:37:20.719342] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization...
00:06:50.693 [2024-11-28 12:37:20.719404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3158524 ]
00:06:50.953 [2024-11-28 12:37:20.856511] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:50.953 [2024-11-28 12:37:20.909613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:50.953 [2024-11-28 12:37:20.927212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:51.628 12:37:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:51.628 12:37:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:51.628 12:37:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3158845
00:06:51.628 12:37:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3158845 /var/tmp/spdk2.sock
00:06:51.628 12:37:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:51.628 12:37:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3158845 ']'
00:06:51.628 12:37:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:51.628 12:37:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:51.628 12:37:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:51.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:51.628 12:37:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:51.628 12:37:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:51.628 [2024-11-28 12:37:21.560870] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization...
00:06:51.628 [2024-11-28 12:37:21.560927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3158845 ]
00:06:51.628 [2024-11-28 12:37:21.695550] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:51.628 [2024-11-28 12:37:21.751259] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:51.628 [2024-11-28 12:37:21.751276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:51.904 [2024-11-28 12:37:21.783847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.473 12:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:52.473 12:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:52.473 12:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3158524
00:06:52.473 12:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3158524
00:06:52.473 12:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:53.042 lslocks: write error
00:06:53.042 12:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3158524
00:06:53.042 12:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3158524 ']'
00:06:53.042 12:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3158524
00:06:53.042 12:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:53.042 12:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:53.042 12:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3158524
00:06:53.042 12:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:53.042 12:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:53.042 12:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3158524'
00:06:53.042 killing process with pid 3158524
00:06:53.042 12:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3158524
00:06:53.042 12:37:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3158524
00:06:53.301 12:37:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3158845
00:06:53.301 12:37:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3158845 ']'
00:06:53.301 12:37:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3158845
00:06:53.301 12:37:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:53.301 12:37:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:53.301 12:37:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3158845
00:06:53.301 12:37:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:53.301 12:37:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:53.301 12:37:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3158845'
00:06:53.301 killing process with pid 3158845
00:06:53.301 12:37:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3158845
00:06:53.301 12:37:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3158845
00:06:53.560
00:06:53.561 real 0m2.931s
00:06:53.561 user 0m3.159s
00:06:53.561 sys 0m0.896s
00:06:53.561 12:37:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:53.561 12:37:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:53.561 ************************************
00:06:53.561 END TEST non_locking_app_on_locked_coremask
00:06:53.561 ************************************
00:06:53.561 12:37:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:53.561 12:37:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:53.561 12:37:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:53.561 12:37:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:53.561 ************************************
00:06:53.561 START TEST locking_app_on_unlocked_coremask
00:06:53.561 ************************************
00:06:53.561 12:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:06:53.561 12:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3159231
00:06:53.561 12:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3159231 /var/tmp/spdk.sock
00:06:53.561 12:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:53.561 12:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3159231 ']'
00:06:53.561 12:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:53.561 12:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:53.561 12:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:53.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:53.561 12:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:53.561 12:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:53.820 [2024-11-28 12:37:23.727595] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization...
00:06:53.820 [2024-11-28 12:37:23.727647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159231 ]
00:06:53.820 [2024-11-28 12:37:23.861443] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:53.820 [2024-11-28 12:37:23.915702] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:53.820 [2024-11-28 12:37:23.915721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:53.820 [2024-11-28 12:37:23.932877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:54.389 12:37:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:54.389 12:37:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:54.389 12:37:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3159458
00:06:54.389 12:37:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3159458 /var/tmp/spdk2.sock
00:06:54.389 12:37:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3159458 ']'
00:06:54.389 12:37:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:54.389 12:37:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:54.389 12:37:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:54.389 12:37:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:54.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:54.389 12:37:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:54.389 12:37:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:54.649 [2024-11-28 12:37:24.566670] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization...
00:06:54.649 [2024-11-28 12:37:24.566726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159458 ]
00:06:54.649 [2024-11-28 12:37:24.700553] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:54.649 [2024-11-28 12:37:24.756876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:54.909 [2024-11-28 12:37:24.789385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:55.481 12:37:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:55.481 12:37:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:55.481 12:37:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3159458
00:06:55.481 12:37:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3159458
00:06:55.481 12:37:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:55.481 lslocks: write error
00:06:55.481 12:37:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3159231
00:06:55.481 12:37:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3159231 ']'
00:06:55.481 12:37:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3159231
00:06:55.481 12:37:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:55.742 12:37:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:55.742 12:37:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3159231
00:06:55.743 12:37:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:55.743 12:37:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:55.743 12:37:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3159231'
00:06:55.743 killing process with pid 3159231
00:06:55.743 12:37:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3159231
00:06:55.743 12:37:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3159231
00:06:56.002 12:37:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3159458
00:06:56.002 12:37:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3159458 ']'
00:06:56.002 12:37:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3159458
00:06:56.002 12:37:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:56.002 12:37:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:56.002 12:37:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3159458
00:06:56.003 12:37:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:56.003 12:37:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:56.003 12:37:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3159458'
00:06:56.003 killing process with pid 3159458
00:06:56.003 12:37:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3159458
00:06:56.003 12:37:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3159458
00:06:56.265
00:06:56.265 real 0m2.604s
00:06:56.265 user 0m2.808s
00:06:56.265 sys 0m0.792s
00:06:56.265 12:37:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:56.265 12:37:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:56.265 ************************************
00:06:56.265 END TEST locking_app_on_unlocked_coremask
00:06:56.265 ************************************
00:06:56.265 12:37:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:56.265 12:37:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:56.265 12:37:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:56.265 12:37:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:56.265 ************************************
00:06:56.265 START TEST locking_app_on_locked_coremask
00:06:56.265 ************************************
00:06:56.265 12:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:06:56.265 12:37:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3159907
00:06:56.265 12:37:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3159907 /var/tmp/spdk.sock
00:06:56.265 12:37:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:56.265 12:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3159907 ']'
00:06:56.265 12:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:56.265 12:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:56.265 12:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:56.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:56.265 12:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:56.265 12:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:56.526 [2024-11-28 12:37:26.408249] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization...
00:06:56.526 [2024-11-28 12:37:26.408309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159907 ]
00:06:56.526 [2024-11-28 12:37:26.544831] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:56.526 [2024-11-28 12:37:26.598267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:56.526 [2024-11-28 12:37:26.615208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:57.100 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:57.100 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:57.100 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3159948
00:06:57.100 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3159948 /var/tmp/spdk2.sock
00:06:57.100 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:57.100 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:57.100 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3159948 /var/tmp/spdk2.sock
00:06:57.100 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:57.100 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:57.100 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:57.100 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:57.100 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3159948 /var/tmp/spdk2.sock
00:06:57.100 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3159948 ']'
00:06:57.100 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:57.100 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:57.100 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:57.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:57.100 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:57.100 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:57.361 [2024-11-28 12:37:27.274309] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization...
00:06:57.361 [2024-11-28 12:37:27.274378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159948 ]
00:06:57.361 [2024-11-28 12:37:27.409636] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:57.361 [2024-11-28 12:37:27.467208] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3159907 has claimed it.
00:06:57.361 [2024-11-28 12:37:27.467241] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:57.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3159948) - No such process
00:06:57.931 ERROR: process (pid: 3159948) is no longer running
00:06:57.931 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:57.931 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:06:57.931 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:06:57.931 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:57.931 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:57.931 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:57.931 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3159907
00:06:57.931 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3159907
00:06:57.931 12:37:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:58.500 lslocks: write error
00:06:58.500 12:37:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3159907
00:06:58.500 12:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3159907 ']'
00:06:58.500 12:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3159907
00:06:58.500 12:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:58.500 12:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:58.500 12:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3159907
00:06:58.500 12:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:58.500 12:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:58.500 12:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3159907'
00:06:58.500 killing process with pid 3159907
00:06:58.500 12:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3159907
00:06:58.500 12:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3159907
00:06:58.761
00:06:58.761 real 0m2.426s
00:06:58.761 user 0m2.641s
00:06:58.761 sys 0m0.687s
00:06:58.761 12:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:58.761 12:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:58.761 ************************************
00:06:58.761 END TEST locking_app_on_locked_coremask ************************************
00:06:58.761 12:37:28 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:58.761 12:37:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:58.761 12:37:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:58.761 12:37:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:58.761 ************************************
00:06:58.761 START TEST locking_overlapped_coremask
00:06:58.761 ************************************
00:06:58.761 12:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:06:58.761 12:37:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3160317
00:06:58.761 12:37:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3160317 /var/tmp/spdk.sock
00:06:58.761 12:37:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:06:58.761 12:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3160317 ']'
00:06:58.761 12:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:58.761 12:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:58.761 12:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:58.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:58.761 12:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.761 12:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.021 [2024-11-28 12:37:28.909327] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:59.021 [2024-11-28 12:37:28.909379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3160317 ] 00:06:59.021 [2024-11-28 12:37:29.043519] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:59.021 [2024-11-28 12:37:29.097106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.021 [2024-11-28 12:37:29.123809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.021 [2024-11-28 12:37:29.123937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.021 [2024-11-28 12:37:29.123939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.593 12:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.593 12:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:59.593 12:37:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3160600 00:06:59.593 12:37:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3160600 /var/tmp/spdk2.sock 00:06:59.593 12:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:59.593 12:37:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:59.593 12:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3160600 /var/tmp/spdk2.sock 00:06:59.593 12:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:59.593 12:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.593 12:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:59.593 12:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.593 12:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3160600 /var/tmp/spdk2.sock 00:06:59.593 12:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3160600 ']' 00:06:59.593 12:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.593 12:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.593 12:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.593 12:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.593 12:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.854 [2024-11-28 12:37:29.757988] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:06:59.854 [2024-11-28 12:37:29.758042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3160600 ] 00:06:59.854 [2024-11-28 12:37:29.891984] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:59.854 [2024-11-28 12:37:29.971259] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3160317 has claimed it. 00:06:59.854 [2024-11-28 12:37:29.971292] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:00.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3160600) - No such process 00:07:00.426 ERROR: process (pid: 3160600) is no longer running 00:07:00.426 12:37:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.426 12:37:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:00.426 12:37:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:00.426 12:37:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:00.426 12:37:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:00.426 12:37:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:00.426 12:37:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:00.426 12:37:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:00.426 12:37:30 event.cpu_locks.locking_overlapped_coremask -- 
event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:00.426 12:37:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:00.426 12:37:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3160317 00:07:00.426 12:37:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3160317 ']' 00:07:00.426 12:37:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3160317 00:07:00.426 12:37:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:00.426 12:37:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.426 12:37:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3160317 00:07:00.426 12:37:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.426 12:37:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.426 12:37:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3160317' 00:07:00.426 killing process with pid 3160317 00:07:00.426 12:37:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3160317 00:07:00.426 12:37:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3160317 00:07:00.687 00:07:00.687 real 0m1.760s 00:07:00.687 user 0m4.797s 00:07:00.687 sys 0m0.399s 00:07:00.687 12:37:30 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.687 12:37:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.687 ************************************ 00:07:00.687 END TEST locking_overlapped_coremask 00:07:00.687 ************************************ 00:07:00.687 12:37:30 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:00.687 12:37:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.687 12:37:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.687 12:37:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.687 ************************************ 00:07:00.687 START TEST locking_overlapped_coremask_via_rpc 00:07:00.687 ************************************ 00:07:00.687 12:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:00.687 12:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3160689 00:07:00.687 12:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3160689 /var/tmp/spdk.sock 00:07:00.687 12:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:00.687 12:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3160689 ']' 00:07:00.687 12:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.687 12:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.687 12:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc 
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.687 12:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.687 12:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.687 [2024-11-28 12:37:30.741757] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:00.687 [2024-11-28 12:37:30.741808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3160689 ] 00:07:00.965 [2024-11-28 12:37:30.876408] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:00.965 [2024-11-28 12:37:30.931708] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:00.965 [2024-11-28 12:37:30.931736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.965 [2024-11-28 12:37:30.957392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.965 [2024-11-28 12:37:30.957456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.965 [2024-11-28 12:37:30.957457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.537 12:37:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.537 12:37:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:01.537 12:37:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3161018 00:07:01.537 12:37:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:01.537 12:37:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3161018 /var/tmp/spdk2.sock 00:07:01.537 12:37:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3161018 ']' 00:07:01.537 12:37:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.537 12:37:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.537 12:37:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:01.537 12:37:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.537 12:37:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.537 [2024-11-28 12:37:31.592230] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:01.537 [2024-11-28 12:37:31.592286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3161018 ] 00:07:01.798 [2024-11-28 12:37:31.728781] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:01.798 [2024-11-28 12:37:31.809214] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:01.798 [2024-11-28 12:37:31.809237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.798 [2024-11-28 12:37:31.849637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.798 [2024-11-28 12:37:31.853281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.798 [2024-11-28 12:37:31.853282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.369 [2024-11-28 12:37:32.402239] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3160689 has claimed it. 
00:07:02.369 request: 00:07:02.369 { 00:07:02.369 "method": "framework_enable_cpumask_locks", 00:07:02.369 "req_id": 1 00:07:02.369 } 00:07:02.369 Got JSON-RPC error response 00:07:02.369 response: 00:07:02.369 { 00:07:02.369 "code": -32603, 00:07:02.369 "message": "Failed to claim CPU core: 2" 00:07:02.369 } 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3160689 /var/tmp/spdk.sock 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3160689 ']' 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.369 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.631 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.631 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:02.631 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3161018 /var/tmp/spdk2.sock 00:07:02.631 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3161018 ']' 00:07:02.631 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:02.631 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.631 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:02.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:02.631 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.631 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.892 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.892 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:02.892 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:02.892 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:02.892 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:02.892 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:02.892 00:07:02.892 real 0m2.086s 00:07:02.892 user 0m0.849s 00:07:02.892 sys 0m0.159s 00:07:02.892 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.892 12:37:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.892 ************************************ 00:07:02.892 END TEST locking_overlapped_coremask_via_rpc 00:07:02.892 ************************************ 00:07:02.892 12:37:32 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:02.892 12:37:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3160689 ]] 00:07:02.892 12:37:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3160689 00:07:02.892 12:37:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3160689 ']' 00:07:02.892 12:37:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3160689 00:07:02.892 12:37:32 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:02.892 12:37:32 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.892 12:37:32 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3160689 00:07:02.892 12:37:32 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.892 12:37:32 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.892 12:37:32 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3160689' 00:07:02.892 killing process with pid 3160689 00:07:02.892 12:37:32 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3160689 00:07:02.892 12:37:32 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3160689 00:07:03.183 12:37:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3161018 ]] 00:07:03.183 12:37:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3161018 00:07:03.183 12:37:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3161018 ']' 00:07:03.183 12:37:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3161018 00:07:03.183 12:37:33 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:03.183 12:37:33 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.183 12:37:33 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3161018 00:07:03.183 12:37:33 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:03.183 12:37:33 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:03.183 12:37:33 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3161018' 00:07:03.183 killing process with pid 3161018 00:07:03.183 12:37:33 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3161018 00:07:03.183 12:37:33 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3161018 00:07:03.444 12:37:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:03.444 12:37:33 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:03.444 12:37:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3160689 ]] 00:07:03.444 12:37:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3160689 00:07:03.444 12:37:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3160689 ']' 00:07:03.444 12:37:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3160689 00:07:03.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3160689) - No such process 00:07:03.444 12:37:33 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3160689 is not found' 00:07:03.444 Process with pid 3160689 is not found 00:07:03.444 12:37:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3161018 ]] 00:07:03.444 12:37:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3161018 00:07:03.444 12:37:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3161018 ']' 00:07:03.444 12:37:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3161018 00:07:03.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3161018) - No such process 00:07:03.444 12:37:33 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3161018 is not found' 00:07:03.444 Process with pid 3161018 is not found 00:07:03.444 12:37:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:03.444 00:07:03.444 real 0m16.309s 00:07:03.444 user 0m27.146s 00:07:03.444 sys 0m5.056s 00:07:03.444 12:37:33 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.444 
12:37:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.444 ************************************ 00:07:03.444 END TEST cpu_locks 00:07:03.444 ************************************ 00:07:03.444 00:07:03.444 real 0m43.120s 00:07:03.444 user 1m22.378s 00:07:03.444 sys 0m8.525s 00:07:03.444 12:37:33 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.444 12:37:33 event -- common/autotest_common.sh@10 -- # set +x 00:07:03.444 ************************************ 00:07:03.444 END TEST event 00:07:03.444 ************************************ 00:07:03.444 12:37:33 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:03.444 12:37:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.444 12:37:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.444 12:37:33 -- common/autotest_common.sh@10 -- # set +x 00:07:03.444 ************************************ 00:07:03.444 START TEST thread 00:07:03.444 ************************************ 00:07:03.444 12:37:33 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:03.444 * Looking for test storage... 
00:07:03.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:03.444 12:37:33 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:03.444 12:37:33 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:03.444 12:37:33 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:03.705 12:37:33 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:03.705 12:37:33 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.705 12:37:33 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.705 12:37:33 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.705 12:37:33 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.705 12:37:33 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.705 12:37:33 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.705 12:37:33 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.705 12:37:33 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.705 12:37:33 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.705 12:37:33 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.705 12:37:33 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.705 12:37:33 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:03.705 12:37:33 thread -- scripts/common.sh@345 -- # : 1 00:07:03.705 12:37:33 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.705 12:37:33 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:03.705 12:37:33 thread -- scripts/common.sh@365 -- # decimal 1 00:07:03.705 12:37:33 thread -- scripts/common.sh@353 -- # local d=1 00:07:03.705 12:37:33 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.705 12:37:33 thread -- scripts/common.sh@355 -- # echo 1 00:07:03.705 12:37:33 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.705 12:37:33 thread -- scripts/common.sh@366 -- # decimal 2 00:07:03.705 12:37:33 thread -- scripts/common.sh@353 -- # local d=2 00:07:03.705 12:37:33 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.705 12:37:33 thread -- scripts/common.sh@355 -- # echo 2 00:07:03.705 12:37:33 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.705 12:37:33 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.705 12:37:33 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.705 12:37:33 thread -- scripts/common.sh@368 -- # return 0 00:07:03.705 12:37:33 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.705 12:37:33 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:03.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.705 --rc genhtml_branch_coverage=1 00:07:03.705 --rc genhtml_function_coverage=1 00:07:03.705 --rc genhtml_legend=1 00:07:03.705 --rc geninfo_all_blocks=1 00:07:03.705 --rc geninfo_unexecuted_blocks=1 00:07:03.705 00:07:03.705 ' 00:07:03.705 12:37:33 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:03.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.705 --rc genhtml_branch_coverage=1 00:07:03.705 --rc genhtml_function_coverage=1 00:07:03.705 --rc genhtml_legend=1 00:07:03.705 --rc geninfo_all_blocks=1 00:07:03.705 --rc geninfo_unexecuted_blocks=1 00:07:03.705 00:07:03.705 ' 00:07:03.705 12:37:33 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:03.705 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.705 --rc genhtml_branch_coverage=1 00:07:03.705 --rc genhtml_function_coverage=1 00:07:03.705 --rc genhtml_legend=1 00:07:03.705 --rc geninfo_all_blocks=1 00:07:03.705 --rc geninfo_unexecuted_blocks=1 00:07:03.705 00:07:03.705 ' 00:07:03.705 12:37:33 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:03.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.705 --rc genhtml_branch_coverage=1 00:07:03.705 --rc genhtml_function_coverage=1 00:07:03.705 --rc genhtml_legend=1 00:07:03.705 --rc geninfo_all_blocks=1 00:07:03.705 --rc geninfo_unexecuted_blocks=1 00:07:03.705 00:07:03.705 ' 00:07:03.705 12:37:33 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:03.705 12:37:33 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:03.705 12:37:33 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.705 12:37:33 thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.705 ************************************ 00:07:03.705 START TEST thread_poller_perf 00:07:03.705 ************************************ 00:07:03.706 12:37:33 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:03.706 [2024-11-28 12:37:33.725438] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:03.706 [2024-11-28 12:37:33.725528] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3161468 ] 00:07:03.966 [2024-11-28 12:37:33.860306] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:07:03.966 [2024-11-28 12:37:33.916701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.966 [2024-11-28 12:37:33.933841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.966 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:04.909 [2024-11-28T11:37:35.036Z] ====================================== 00:07:04.909 [2024-11-28T11:37:35.036Z] busy:2403542368 (cyc) 00:07:04.909 [2024-11-28T11:37:35.036Z] total_run_count: 412000 00:07:04.909 [2024-11-28T11:37:35.036Z] tsc_hz: 2394400000 (cyc) 00:07:04.909 [2024-11-28T11:37:35.036Z] ====================================== 00:07:04.909 [2024-11-28T11:37:35.036Z] poller_cost: 5833 (cyc), 2436 (nsec) 00:07:04.909 00:07:04.909 real 0m1.259s 00:07:04.909 user 0m1.081s 00:07:04.909 sys 0m0.074s 00:07:04.909 12:37:34 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.909 12:37:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:04.909 ************************************ 00:07:04.909 END TEST thread_poller_perf 00:07:04.909 ************************************ 00:07:04.909 12:37:34 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:04.909 12:37:34 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:04.909 12:37:34 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.909 12:37:34 thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.168 ************************************ 00:07:05.168 START TEST thread_poller_perf 00:07:05.168 ************************************ 00:07:05.168 12:37:35 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:05.168 [2024-11-28 
12:37:35.060941] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:05.168 [2024-11-28 12:37:35.061030] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3161822 ] 00:07:05.168 [2024-11-28 12:37:35.195631] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:05.168 [2024-11-28 12:37:35.249335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.168 [2024-11-28 12:37:35.265083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.168 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:06.553 [2024-11-28T11:37:36.680Z] ====================================== 00:07:06.553 [2024-11-28T11:37:36.680Z] busy:2395679338 (cyc) 00:07:06.553 [2024-11-28T11:37:36.680Z] total_run_count: 5551000 00:07:06.553 [2024-11-28T11:37:36.680Z] tsc_hz: 2394400000 (cyc) 00:07:06.553 [2024-11-28T11:37:36.680Z] ====================================== 00:07:06.553 [2024-11-28T11:37:36.680Z] poller_cost: 431 (cyc), 180 (nsec) 00:07:06.553 00:07:06.553 real 0m1.248s 00:07:06.553 user 0m1.066s 00:07:06.553 sys 0m0.078s 00:07:06.553 12:37:36 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.553 12:37:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:06.553 ************************************ 00:07:06.553 END TEST thread_poller_perf 00:07:06.553 ************************************ 00:07:06.553 12:37:36 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:06.553 00:07:06.553 real 0m2.862s 00:07:06.553 user 0m2.329s 00:07:06.553 sys 0m0.346s 00:07:06.553 12:37:36 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.553 12:37:36 
thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.553 ************************************ 00:07:06.553 END TEST thread 00:07:06.553 ************************************ 00:07:06.553 12:37:36 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:06.553 12:37:36 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:06.553 12:37:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.553 12:37:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.553 12:37:36 -- common/autotest_common.sh@10 -- # set +x 00:07:06.553 ************************************ 00:07:06.553 START TEST app_cmdline 00:07:06.553 ************************************ 00:07:06.553 12:37:36 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:06.553 * Looking for test storage... 00:07:06.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:06.553 12:37:36 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:06.553 12:37:36 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:06.553 12:37:36 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:06.553 12:37:36 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@338 -- # local 
'op=<' 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.553 12:37:36 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:06.553 12:37:36 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.553 12:37:36 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:06.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.553 --rc genhtml_branch_coverage=1 00:07:06.553 --rc genhtml_function_coverage=1 00:07:06.553 --rc 
genhtml_legend=1 00:07:06.553 --rc geninfo_all_blocks=1 00:07:06.553 --rc geninfo_unexecuted_blocks=1 00:07:06.553 00:07:06.553 ' 00:07:06.553 12:37:36 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:06.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.553 --rc genhtml_branch_coverage=1 00:07:06.553 --rc genhtml_function_coverage=1 00:07:06.553 --rc genhtml_legend=1 00:07:06.553 --rc geninfo_all_blocks=1 00:07:06.553 --rc geninfo_unexecuted_blocks=1 00:07:06.553 00:07:06.553 ' 00:07:06.553 12:37:36 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:06.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.553 --rc genhtml_branch_coverage=1 00:07:06.553 --rc genhtml_function_coverage=1 00:07:06.553 --rc genhtml_legend=1 00:07:06.553 --rc geninfo_all_blocks=1 00:07:06.553 --rc geninfo_unexecuted_blocks=1 00:07:06.553 00:07:06.553 ' 00:07:06.553 12:37:36 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:06.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.553 --rc genhtml_branch_coverage=1 00:07:06.553 --rc genhtml_function_coverage=1 00:07:06.553 --rc genhtml_legend=1 00:07:06.553 --rc geninfo_all_blocks=1 00:07:06.553 --rc geninfo_unexecuted_blocks=1 00:07:06.553 00:07:06.553 ' 00:07:06.553 12:37:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:06.553 12:37:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3162224 00:07:06.553 12:37:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3162224 00:07:06.553 12:37:36 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3162224 ']' 00:07:06.553 12:37:36 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:06.553 12:37:36 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.553 12:37:36 app_cmdline -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.553 12:37:36 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.553 12:37:36 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.553 12:37:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:06.553 [2024-11-28 12:37:36.665942] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:06.553 [2024-11-28 12:37:36.666000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3162224 ] 00:07:06.814 [2024-11-28 12:37:36.792139] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:06.814 [2024-11-28 12:37:36.847391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.814 [2024-11-28 12:37:36.870608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.384 12:37:37 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.384 12:37:37 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:07.384 12:37:37 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:07.644 { 00:07:07.644 "version": "SPDK v25.01-pre git sha1 35cd3e84d", 00:07:07.644 "fields": { 00:07:07.644 "major": 25, 00:07:07.644 "minor": 1, 00:07:07.644 "patch": 0, 00:07:07.644 "suffix": "-pre", 00:07:07.644 "commit": "35cd3e84d" 00:07:07.644 } 00:07:07.644 } 00:07:07.644 12:37:37 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:07.644 12:37:37 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:07.644 12:37:37 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:07.644 12:37:37 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:07.644 12:37:37 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:07.644 12:37:37 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.644 12:37:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:07.644 12:37:37 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:07.644 12:37:37 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:07.644 12:37:37 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.644 12:37:37 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:07.644 12:37:37 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:07.644 12:37:37 app_cmdline -- app/cmdline.sh@30 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:07.644 12:37:37 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:07.644 12:37:37 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:07.644 12:37:37 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.644 12:37:37 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.644 12:37:37 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.644 12:37:37 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.644 12:37:37 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.644 12:37:37 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.644 12:37:37 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.644 12:37:37 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:07.644 12:37:37 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:07.905 request: 00:07:07.905 { 00:07:07.905 "method": "env_dpdk_get_mem_stats", 00:07:07.905 "req_id": 1 00:07:07.905 } 00:07:07.905 Got JSON-RPC error response 00:07:07.905 response: 00:07:07.905 { 00:07:07.905 "code": -32601, 00:07:07.905 "message": "Method not found" 00:07:07.905 } 00:07:07.905 12:37:37 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:07.905 12:37:37 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:07.905 
12:37:37 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:07.905 12:37:37 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:07.905 12:37:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3162224 00:07:07.905 12:37:37 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3162224 ']' 00:07:07.905 12:37:37 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3162224 00:07:07.905 12:37:37 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:07.905 12:37:37 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.905 12:37:37 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3162224 00:07:07.905 12:37:37 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.905 12:37:37 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.905 12:37:37 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3162224' 00:07:07.905 killing process with pid 3162224 00:07:07.905 12:37:37 app_cmdline -- common/autotest_common.sh@973 -- # kill 3162224 00:07:07.905 12:37:37 app_cmdline -- common/autotest_common.sh@978 -- # wait 3162224 00:07:08.166 00:07:08.166 real 0m1.677s 00:07:08.166 user 0m1.873s 00:07:08.166 sys 0m0.466s 00:07:08.166 12:37:38 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.166 12:37:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:08.166 ************************************ 00:07:08.166 END TEST app_cmdline 00:07:08.166 ************************************ 00:07:08.166 12:37:38 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:08.166 12:37:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.166 12:37:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.166 12:37:38 -- common/autotest_common.sh@10 -- # set +x 00:07:08.166 
************************************ 00:07:08.166 START TEST version 00:07:08.166 ************************************ 00:07:08.166 12:37:38 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:08.166 * Looking for test storage... 00:07:08.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:08.166 12:37:38 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:08.166 12:37:38 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:08.166 12:37:38 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:08.427 12:37:38 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:08.427 12:37:38 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.427 12:37:38 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.427 12:37:38 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.427 12:37:38 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.427 12:37:38 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.427 12:37:38 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.427 12:37:38 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.427 12:37:38 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.427 12:37:38 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.427 12:37:38 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.427 12:37:38 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.427 12:37:38 version -- scripts/common.sh@344 -- # case "$op" in 00:07:08.427 12:37:38 version -- scripts/common.sh@345 -- # : 1 00:07:08.427 12:37:38 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.427 12:37:38 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.427 12:37:38 version -- scripts/common.sh@365 -- # decimal 1 00:07:08.427 12:37:38 version -- scripts/common.sh@353 -- # local d=1 00:07:08.427 12:37:38 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.427 12:37:38 version -- scripts/common.sh@355 -- # echo 1 00:07:08.427 12:37:38 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.427 12:37:38 version -- scripts/common.sh@366 -- # decimal 2 00:07:08.427 12:37:38 version -- scripts/common.sh@353 -- # local d=2 00:07:08.427 12:37:38 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.427 12:37:38 version -- scripts/common.sh@355 -- # echo 2 00:07:08.427 12:37:38 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.427 12:37:38 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.427 12:37:38 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.427 12:37:38 version -- scripts/common.sh@368 -- # return 0 00:07:08.427 12:37:38 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.427 12:37:38 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:08.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.427 --rc genhtml_branch_coverage=1 00:07:08.427 --rc genhtml_function_coverage=1 00:07:08.427 --rc genhtml_legend=1 00:07:08.427 --rc geninfo_all_blocks=1 00:07:08.427 --rc geninfo_unexecuted_blocks=1 00:07:08.427 00:07:08.427 ' 00:07:08.427 12:37:38 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:08.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.427 --rc genhtml_branch_coverage=1 00:07:08.427 --rc genhtml_function_coverage=1 00:07:08.427 --rc genhtml_legend=1 00:07:08.427 --rc geninfo_all_blocks=1 00:07:08.427 --rc geninfo_unexecuted_blocks=1 00:07:08.427 00:07:08.427 ' 00:07:08.427 12:37:38 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:08.427 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.427 --rc genhtml_branch_coverage=1 00:07:08.427 --rc genhtml_function_coverage=1 00:07:08.427 --rc genhtml_legend=1 00:07:08.427 --rc geninfo_all_blocks=1 00:07:08.427 --rc geninfo_unexecuted_blocks=1 00:07:08.427 00:07:08.427 ' 00:07:08.427 12:37:38 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:08.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.427 --rc genhtml_branch_coverage=1 00:07:08.427 --rc genhtml_function_coverage=1 00:07:08.427 --rc genhtml_legend=1 00:07:08.427 --rc geninfo_all_blocks=1 00:07:08.427 --rc geninfo_unexecuted_blocks=1 00:07:08.427 00:07:08.427 ' 00:07:08.427 12:37:38 version -- app/version.sh@17 -- # get_header_version major 00:07:08.427 12:37:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:08.427 12:37:38 version -- app/version.sh@14 -- # cut -f2 00:07:08.427 12:37:38 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.427 12:37:38 version -- app/version.sh@17 -- # major=25 00:07:08.427 12:37:38 version -- app/version.sh@18 -- # get_header_version minor 00:07:08.427 12:37:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:08.427 12:37:38 version -- app/version.sh@14 -- # cut -f2 00:07:08.427 12:37:38 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.427 12:37:38 version -- app/version.sh@18 -- # minor=1 00:07:08.427 12:37:38 version -- app/version.sh@19 -- # get_header_version patch 00:07:08.427 12:37:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:08.427 12:37:38 version -- app/version.sh@14 -- # cut -f2 00:07:08.427 12:37:38 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.427 
12:37:38 version -- app/version.sh@19 -- # patch=0 00:07:08.427 12:37:38 version -- app/version.sh@20 -- # get_header_version suffix 00:07:08.427 12:37:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:08.427 12:37:38 version -- app/version.sh@14 -- # cut -f2 00:07:08.427 12:37:38 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.427 12:37:38 version -- app/version.sh@20 -- # suffix=-pre 00:07:08.427 12:37:38 version -- app/version.sh@22 -- # version=25.1 00:07:08.427 12:37:38 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:08.427 12:37:38 version -- app/version.sh@28 -- # version=25.1rc0 00:07:08.427 12:37:38 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:08.427 12:37:38 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:08.427 12:37:38 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:08.427 12:37:38 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:08.427 00:07:08.427 real 0m0.282s 00:07:08.427 user 0m0.175s 00:07:08.427 sys 0m0.157s 00:07:08.427 12:37:38 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.427 12:37:38 version -- common/autotest_common.sh@10 -- # set +x 00:07:08.427 ************************************ 00:07:08.427 END TEST version 00:07:08.427 ************************************ 00:07:08.427 12:37:38 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:08.427 12:37:38 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:08.427 12:37:38 -- spdk/autotest.sh@194 -- # uname -s 00:07:08.427 12:37:38 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:08.427 12:37:38 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:08.427 12:37:38 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:08.427 12:37:38 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:08.427 12:37:38 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:08.427 12:37:38 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:08.428 12:37:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:08.428 12:37:38 -- common/autotest_common.sh@10 -- # set +x 00:07:08.428 12:37:38 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:08.428 12:37:38 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:08.428 12:37:38 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:08.428 12:37:38 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:08.428 12:37:38 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:08.428 12:37:38 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:08.428 12:37:38 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:08.428 12:37:38 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:08.428 12:37:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.428 12:37:38 -- common/autotest_common.sh@10 -- # set +x 00:07:08.689 ************************************ 00:07:08.689 START TEST nvmf_tcp 00:07:08.689 ************************************ 00:07:08.689 12:37:38 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:08.689 * Looking for test storage... 
00:07:08.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:08.689 12:37:38 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:08.689 12:37:38 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:08.689 12:37:38 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:08.689 12:37:38 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.689 12:37:38 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:08.689 12:37:38 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.689 12:37:38 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:08.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.689 --rc genhtml_branch_coverage=1 00:07:08.689 --rc genhtml_function_coverage=1 00:07:08.689 --rc genhtml_legend=1 00:07:08.689 --rc geninfo_all_blocks=1 00:07:08.689 --rc geninfo_unexecuted_blocks=1 00:07:08.689 00:07:08.689 ' 00:07:08.689 12:37:38 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:08.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.689 --rc genhtml_branch_coverage=1 00:07:08.689 --rc genhtml_function_coverage=1 00:07:08.689 --rc genhtml_legend=1 00:07:08.689 --rc geninfo_all_blocks=1 00:07:08.689 --rc geninfo_unexecuted_blocks=1 00:07:08.689 00:07:08.689 ' 00:07:08.689 12:37:38 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:08.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.689 --rc genhtml_branch_coverage=1 00:07:08.689 --rc genhtml_function_coverage=1 00:07:08.689 --rc genhtml_legend=1 00:07:08.689 --rc geninfo_all_blocks=1 00:07:08.689 --rc geninfo_unexecuted_blocks=1 00:07:08.689 00:07:08.689 ' 00:07:08.689 12:37:38 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:08.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.689 --rc genhtml_branch_coverage=1 00:07:08.689 --rc genhtml_function_coverage=1 00:07:08.689 --rc genhtml_legend=1 00:07:08.689 --rc geninfo_all_blocks=1 00:07:08.689 --rc geninfo_unexecuted_blocks=1 00:07:08.689 00:07:08.689 ' 00:07:08.689 12:37:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:08.689 12:37:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:08.689 12:37:38 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:08.689 12:37:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:08.689 12:37:38 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.689 12:37:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:08.689 ************************************ 00:07:08.689 START TEST nvmf_target_core 00:07:08.689 ************************************ 00:07:08.689 12:37:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:08.952 * Looking for test storage... 
00:07:08.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:08.952 12:37:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:08.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.952 --rc genhtml_branch_coverage=1 00:07:08.952 --rc genhtml_function_coverage=1 00:07:08.952 --rc genhtml_legend=1 00:07:08.952 --rc geninfo_all_blocks=1 00:07:08.952 --rc geninfo_unexecuted_blocks=1 00:07:08.952 00:07:08.952 ' 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:08.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.952 --rc genhtml_branch_coverage=1 
00:07:08.952 --rc genhtml_function_coverage=1 00:07:08.952 --rc genhtml_legend=1 00:07:08.952 --rc geninfo_all_blocks=1 00:07:08.952 --rc geninfo_unexecuted_blocks=1 00:07:08.952 00:07:08.952 ' 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:08.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.952 --rc genhtml_branch_coverage=1 00:07:08.952 --rc genhtml_function_coverage=1 00:07:08.952 --rc genhtml_legend=1 00:07:08.952 --rc geninfo_all_blocks=1 00:07:08.952 --rc geninfo_unexecuted_blocks=1 00:07:08.952 00:07:08.952 ' 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:08.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.952 --rc genhtml_branch_coverage=1 00:07:08.952 --rc genhtml_function_coverage=1 00:07:08.952 --rc genhtml_legend=1 00:07:08.952 --rc geninfo_all_blocks=1 00:07:08.952 --rc geninfo_unexecuted_blocks=1 00:07:08.952 00:07:08.952 ' 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:08.952 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:08.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.953 12:37:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:09.215 ************************************ 00:07:09.215 START TEST nvmf_abort 00:07:09.215 ************************************ 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:09.215 * Looking for test storage... 
00:07:09.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.215 
12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:09.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.215 --rc genhtml_branch_coverage=1 00:07:09.215 --rc genhtml_function_coverage=1 00:07:09.215 --rc genhtml_legend=1 00:07:09.215 --rc geninfo_all_blocks=1 00:07:09.215 --rc 
geninfo_unexecuted_blocks=1 00:07:09.215 00:07:09.215 ' 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:09.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.215 --rc genhtml_branch_coverage=1 00:07:09.215 --rc genhtml_function_coverage=1 00:07:09.215 --rc genhtml_legend=1 00:07:09.215 --rc geninfo_all_blocks=1 00:07:09.215 --rc geninfo_unexecuted_blocks=1 00:07:09.215 00:07:09.215 ' 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:09.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.215 --rc genhtml_branch_coverage=1 00:07:09.215 --rc genhtml_function_coverage=1 00:07:09.215 --rc genhtml_legend=1 00:07:09.215 --rc geninfo_all_blocks=1 00:07:09.215 --rc geninfo_unexecuted_blocks=1 00:07:09.215 00:07:09.215 ' 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:09.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.215 --rc genhtml_branch_coverage=1 00:07:09.215 --rc genhtml_function_coverage=1 00:07:09.215 --rc genhtml_legend=1 00:07:09.215 --rc geninfo_all_blocks=1 00:07:09.215 --rc geninfo_unexecuted_blocks=1 00:07:09.215 00:07:09.215 ' 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.215 12:37:39 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.215 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:09.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:09.216 12:37:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:17.377 12:37:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:17.377 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:17.377 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:17.377 12:37:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:17.377 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:4b:00.1: cvl_0_1' 00:07:17.377 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:17.377 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:17.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:17.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:07:17.378 00:07:17.378 --- 10.0.0.2 ping statistics --- 00:07:17.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.378 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:17.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:17.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:07:17.378 00:07:17.378 --- 10.0.0.1 ping statistics --- 00:07:17.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.378 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3166573 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3166573 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3166573 ']' 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.378 12:37:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.378 [2024-11-28 12:37:46.957259] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:17.378 [2024-11-28 12:37:46.957328] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.378 [2024-11-28 12:37:47.102620] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:07:17.378 [2024-11-28 12:37:47.162973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.378 [2024-11-28 12:37:47.193025] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.378 [2024-11-28 12:37:47.193070] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:17.378 [2024-11-28 12:37:47.193078] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.378 [2024-11-28 12:37:47.193085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:17.378 [2024-11-28 12:37:47.193091] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:17.378 [2024-11-28 12:37:47.195090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.378 [2024-11-28 12:37:47.195238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.378 [2024-11-28 12:37:47.195262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o 
-u 8192 -a 256 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.951 [2024-11-28 12:37:47.837931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.951 Malloc0 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.951 Delay0 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.951 [2024-11-28 12:37:47.921442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.951 12:37:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:18.213 [2024-11-28 12:37:48.171860] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:20.131 Initializing NVMe Controllers 
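The abort test above configures its target through a short RPC sequence: create the TCP transport, create a malloc bdev, layer a delay bdev on top of it (so in-flight I/O lingers long enough to be aborted), then expose it via a subsystem with a namespace and a TCP listener. A dry-run sketch of that sequence follows; the `rpc` echo wrapper is illustrative (the harness actually invokes `scripts/rpc.py` against the namespaced `nvmf_tgt`), while the sizes, NQN, address, and port are taken from this log.

```shell
# Dry-run sketch of the RPC calls target/abort.sh issues (echo wrapper is
# illustrative; replace it with the real scripts/rpc.py to execute).
rpc() { echo "+ rpc.py $*"; }

configure_abort_target() {
    rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc bdev_malloc_create 64 4096 -b Malloc0              # 64 MiB, 4 KiB blocks
    rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000        # ~1 s added latency per op
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
}
configure_abort_target
```

The delay bdev is the point of the exercise: with every read/write stalled, the abort example can submit aborts that race real outstanding commands, which is what the success/unsuccessful counters below measure.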
00:07:20.131 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:20.131 controller IO queue size 128 less than required 00:07:20.131 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:20.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:20.131 Initialization complete. Launching workers. 00:07:20.131 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28522 00:07:20.131 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28583, failed to submit 62 00:07:20.131 success 28526, unsuccessful 57, failed 0 00:07:20.131 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:20.131 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.131 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:20.131 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.131 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:20.131 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:20.131 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:20.131 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:20.131 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:20.131 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:20.131 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:20.131 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:07:20.131 rmmod nvme_tcp 00:07:20.131 rmmod nvme_fabrics 00:07:20.394 rmmod nvme_keyring 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3166573 ']' 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3166573 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3166573 ']' 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3166573 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3166573 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3166573' 00:07:20.394 killing process with pid 3166573 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3166573 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3166573 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.394 12:37:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.943 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:22.943 00:07:22.943 real 0m13.468s 00:07:22.943 user 0m13.776s 00:07:22.943 sys 0m6.681s 00:07:22.943 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.943 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.943 ************************************ 00:07:22.943 END TEST nvmf_abort 00:07:22.943 ************************************ 00:07:22.943 12:37:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:22.943 12:37:52 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:22.943 12:37:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.943 12:37:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:22.943 ************************************ 00:07:22.943 START TEST nvmf_ns_hotplug_stress 00:07:22.943 ************************************ 00:07:22.943 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:22.943 * Looking for test storage... 00:07:22.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.943 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:22.943 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:22.943 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:22.943 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:22.943 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.944 12:37:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:22.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.944 --rc genhtml_branch_coverage=1 00:07:22.944 --rc genhtml_function_coverage=1 00:07:22.944 --rc 
genhtml_legend=1 00:07:22.944 --rc geninfo_all_blocks=1 00:07:22.944 --rc geninfo_unexecuted_blocks=1 00:07:22.944 00:07:22.944 ' 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:22.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.944 --rc genhtml_branch_coverage=1 00:07:22.944 --rc genhtml_function_coverage=1 00:07:22.944 --rc genhtml_legend=1 00:07:22.944 --rc geninfo_all_blocks=1 00:07:22.944 --rc geninfo_unexecuted_blocks=1 00:07:22.944 00:07:22.944 ' 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:22.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.944 --rc genhtml_branch_coverage=1 00:07:22.944 --rc genhtml_function_coverage=1 00:07:22.944 --rc genhtml_legend=1 00:07:22.944 --rc geninfo_all_blocks=1 00:07:22.944 --rc geninfo_unexecuted_blocks=1 00:07:22.944 00:07:22.944 ' 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:22.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.944 --rc genhtml_branch_coverage=1 00:07:22.944 --rc genhtml_function_coverage=1 00:07:22.944 --rc genhtml_legend=1 00:07:22.944 --rc geninfo_all_blocks=1 00:07:22.944 --rc geninfo_unexecuted_blocks=1 00:07:22.944 00:07:22.944 ' 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.944 12:37:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:22.944 12:37:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:22.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:22.944 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:22.945 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:22.945 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.945 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:22.945 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.945 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:22.945 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:22.945 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:22.945 12:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@321 -- # local -ga x722 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:31.090 12:38:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:31.090 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:31.090 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:31.090 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:31.091 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:31.091 12:38:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:31.091 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:31.091 12:38:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:31.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:31.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:07:31.091 00:07:31.091 --- 10.0.0.2 ping statistics --- 00:07:31.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.091 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:31.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:31.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:07:31.091 00:07:31.091 --- 10.0.0.1 ping statistics --- 00:07:31.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.091 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3171444 00:07:31.091 12:38:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3171444 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3171444 ']' 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.091 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.092 12:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:31.092 [2024-11-28 12:38:00.492747] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:31.092 [2024-11-28 12:38:00.492814] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.092 [2024-11-28 12:38:00.637449] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
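One error worth noting in the trace above is `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected`, raised by the logged command `'[' '' -eq 1 ']'`: the `test`/`[` builtin's `-eq` operator requires integer operands, and the variable expanded to an empty string. A minimal sketch of the failure mode and a defensive form (the variable name `SOME_FLAG` is illustrative, not taken from the SPDK scripts):

```shell
# Reproduce and fix the failure seen in the trace: '[' '' -eq 1 ']' makes
# test(1) report "integer expression expected", because -eq compares
# integers and the variable expanded to an empty string.
SOME_FLAG=""   # hypothetical name, empty as in the logged invocation

# Fragile form: errors out (non-zero status, stderr noise) when empty.
[ "$SOME_FLAG" -eq 1 ] 2>/dev/null && fragile=set || fragile=unset

# Robust form: default the expansion to 0 so -eq always sees an integer.
[ "${SOME_FLAG:-0}" -eq 1 ] && robust=set || robust=unset

echo "fragile=$fragile robust=$robust"   # -> fragile=unset robust=unset
```

The `${VAR:-0}` default is the usual guard when a flag may be unset or empty; the test still takes the false branch, but without the error the harness logs here.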
00:07:31.092 [2024-11-28 12:38:00.697679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:31.092 [2024-11-28 12:38:00.725146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.092 [2024-11-28 12:38:00.725199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.092 [2024-11-28 12:38:00.725207] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.092 [2024-11-28 12:38:00.725215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.092 [2024-11-28 12:38:00.725221] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:31.092 [2024-11-28 12:38:00.727222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.092 [2024-11-28 12:38:00.727404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.092 [2024-11-28 12:38:00.727405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.354 12:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.354 12:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:31.354 12:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:31.354 12:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:31.354 12:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:31.354 12:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.354 12:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # 
null_size=1000 00:07:31.354 12:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:31.614 [2024-11-28 12:38:01.511459] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.614 12:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:31.875 12:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.875 [2024-11-28 12:38:01.909559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.875 12:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:32.135 12:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:32.397 Malloc0 00:07:32.397 12:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:32.397 Delay0 00:07:32.658 12:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.658 12:38:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:32.918 NULL1 00:07:32.918 12:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:33.178 12:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3172132 00:07:33.178 12:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:33.178 12:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:33.178 12:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.439 12:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.439 12:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:33.439 12:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:33.699 true 00:07:33.699 12:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:33.699 12:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.959 12:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.959 12:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:33.959 12:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:34.219 true 00:07:34.219 12:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:34.219 12:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.479 12:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.740 12:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:34.740 12:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:34.740 true 00:07:34.740 12:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:34.740 12:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:35.001 12:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.260 12:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:35.260 12:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:35.260 true 00:07:35.260 12:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:35.260 12:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.520 12:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.781 12:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:35.781 12:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:35.781 true 00:07:35.781 12:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:35.781 12:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.041 12:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.302 12:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:36.302 12:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:36.302 true 00:07:36.562 12:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:36.562 12:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.562 12:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.823 12:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:36.823 12:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:37.083 true 00:07:37.083 12:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:37.083 12:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.083 12:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
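Between each remove/add/resize cycle the script runs `kill -0 3172132` against the `spdk_nvme_perf` PID. Signal 0 delivers nothing; it only checks that the process exists and is signalable, so the loop aborts as soon as the workload dies. A small sketch of that liveness-check pattern (the background `sleep` stands in for the perf workload):

```shell
# kill -0 sends no signal; it only tests whether the PID exists and can
# be signalled -- the same check the hotplug loop applies to its perf
# process between namespace remove/add/resize cycles.
sleep 5 &            # stand-in for the long-running workload
pid=$!

# While the process is alive, kill -0 succeeds.
if kill -0 "$pid" 2>/dev/null; then alive=yes; else alive=no; fi

# Once the process is terminated and reaped, kill -0 fails.
kill "$pid" 2>/dev/null
wait "$pid" 2>/dev/null || true
if kill -0 "$pid" 2>/dev/null; then gone=no; else gone=yes; fi
```

Note that the PID must be reaped (here via `wait`) before `kill -0` reliably fails; a zombie child would still be signalable.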
00:07:37.344 12:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:37.344 12:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:37.604 true 00:07:37.604 12:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:37.604 12:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.604 12:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.864 12:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:37.864 12:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:38.124 true 00:07:38.124 12:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:38.124 12:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.384 12:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.384 12:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 
00:07:38.384 12:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:38.644 true 00:07:38.644 12:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:38.644 12:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.904 12:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.904 12:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:38.904 12:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:39.165 true 00:07:39.165 12:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:39.165 12:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.425 12:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.425 12:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:39.425 12:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:39.684 true 00:07:39.684 12:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:39.684 12:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.944 12:38:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.204 12:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:40.204 12:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:40.204 true 00:07:40.204 12:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:40.204 12:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.465 12:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.724 12:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:40.724 12:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:40.724 true 00:07:40.724 12:38:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:40.724 12:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.984 12:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.300 12:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:41.300 12:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:41.300 true 00:07:41.300 12:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:41.300 12:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.654 12:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.654 12:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:41.654 12:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:42.007 true 00:07:42.007 12:38:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:42.007 12:38:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.007 12:38:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.271 12:38:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:42.271 12:38:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:42.531 true 00:07:42.531 12:38:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:42.531 12:38:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.792 12:38:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.792 12:38:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:42.792 12:38:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:43.052 true 00:07:43.052 12:38:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:43.052 12:38:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.315 12:38:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.315 12:38:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:43.315 12:38:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:43.575 true 00:07:43.575 12:38:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:43.575 12:38:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.836 12:38:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.098 12:38:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:44.098 12:38:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:44.098 true 00:07:44.098 12:38:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:44.098 12:38:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.359 
12:38:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.620 12:38:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:44.620 12:38:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:44.620 true 00:07:44.620 12:38:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:44.620 12:38:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.881 12:38:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.142 12:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:45.142 12:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:45.142 true 00:07:45.142 12:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:45.142 12:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.404 12:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.664 12:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:45.664 12:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:45.664 true 00:07:45.664 12:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:45.664 12:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.925 12:38:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.186 12:38:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:46.186 12:38:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:46.187 true 00:07:46.187 12:38:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:46.187 12:38:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.447 12:38:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.707 
12:38:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:46.707 12:38:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:46.968 true 00:07:46.968 12:38:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:46.968 12:38:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.968 12:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.228 12:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:47.228 12:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:47.489 true 00:07:47.489 12:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:47.489 12:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.489 12:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.749 12:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:47.749 12:38:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:48.010 true 00:07:48.010 12:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:48.010 12:38:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.271 12:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.271 12:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:48.271 12:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:48.531 true 00:07:48.531 12:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:48.531 12:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.792 12:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.792 12:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:48.792 12:38:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:49.053 true 00:07:49.053 12:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:49.053 12:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.314 12:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.314 12:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:49.314 12:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:49.576 true 00:07:49.576 12:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:49.576 12:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.836 12:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.096 12:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:50.096 12:38:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:50.096 true 00:07:50.096 12:38:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:50.096 12:38:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.357 12:38:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.618 12:38:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:50.618 12:38:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:50.618 true 00:07:50.618 12:38:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:50.618 12:38:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.878 12:38:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.139 12:38:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:51.139 12:38:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:51.139 true 00:07:51.401 12:38:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:51.401 12:38:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.401 12:38:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.661 12:38:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:51.661 12:38:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:51.922 true 00:07:51.922 12:38:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:51.922 12:38:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.922 12:38:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.183 12:38:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:52.183 12:38:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:52.445 true 00:07:52.445 12:38:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:52.445 12:38:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.445 12:38:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.706 12:38:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:52.706 12:38:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:52.967 true 00:07:52.967 12:38:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:52.967 12:38:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.228 12:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.228 12:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:53.228 12:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:53.488 true 00:07:53.488 12:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:53.488 12:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.749 
12:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.749 12:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:53.749 12:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:54.010 true 00:07:54.010 12:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:54.010 12:38:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.271 12:38:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.271 12:38:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:54.271 12:38:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:54.532 true 00:07:54.532 12:38:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:54.532 12:38:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.794 12:38:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.055 12:38:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:55.055 12:38:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:55.055 true 00:07:55.055 12:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:55.055 12:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.315 12:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.575 12:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:55.575 12:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:55.575 true 00:07:55.575 12:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:55.575 12:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.836 12:38:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.097 
12:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:56.097 12:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:56.097 true 00:07:56.097 12:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:56.097 12:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.358 12:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.619 12:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:56.619 12:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:56.619 true 00:07:56.879 12:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:56.879 12:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.879 12:38:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.140 12:38:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:57.140 12:38:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:57.401 true 00:07:57.401 12:38:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:57.402 12:38:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.402 12:38:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.662 12:38:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:57.662 12:38:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:57.923 true 00:07:57.923 12:38:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:57.923 12:38:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.184 12:38:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.184 12:38:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:58.184 12:38:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:58.446 true 00:07:58.446 12:38:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:58.446 12:38:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.707 12:38:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.707 12:38:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:58.707 12:38:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:58.967 true 00:07:58.967 12:38:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:58.968 12:38:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.228 12:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.228 12:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:59.228 12:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:59.488 true 00:07:59.488 12:38:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:07:59.488 12:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.748 12:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.748 12:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:07:59.748 12:38:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:08:00.009 true 00:08:00.009 12:38:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:08:00.009 12:38:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.270 12:38:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.531 12:38:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:08:00.531 12:38:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:08:00.531 true 00:08:00.531 12:38:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:08:00.531 12:38:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.792 12:38:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.052 12:38:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:08:01.052 12:38:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:08:01.052 true 00:08:01.052 12:38:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:08:01.052 12:38:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.313 12:38:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.573 12:38:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:08:01.573 12:38:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:08:01.573 true 00:08:01.834 12:38:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:08:01.834 12:38:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.834 12:38:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.094 12:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:08:02.094 12:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:08:02.356 true 00:08:02.356 12:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:08:02.356 12:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.356 12:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.617 12:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:08:02.617 12:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:08:02.878 true 00:08:02.878 12:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132 00:08:02.878 12:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.878 
12:38:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:03.139 12:38:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:08:03.139 12:38:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:08:03.399 true
00:08:03.399 12:38:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132
00:08:03.399 12:38:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:03.399 Initializing NVMe Controllers
00:08:03.399 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:03.399 Controller IO queue size 128, less than required.
00:08:03.399 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:03.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:03.399 Initialization complete. Launching workers.
00:08:03.399 ========================================================
00:08:03.399                                                                                                  Latency(us)
00:08:03.399 Device Information                                                       : IOPS      MiB/s    Average        min        max
00:08:03.399 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30325.45  14.81    4220.79    1128.90   11030.17
00:08:03.399 ========================================================
00:08:03.399 Total                                                                    : 30325.45  14.81    4220.79    1128.90   11030.17
00:08:03.660 12:38:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:03.660 12:38:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:08:03.660 12:38:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:08:03.920 true
00:08:03.920 12:38:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3172132
00:08:03.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3172132) - No such process
00:08:03.920 12:38:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3172132
00:08:03.920 12:38:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:04.182 12:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:04.182 12:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:04.182 12:38:34
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:04.182 12:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:04.182 12:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:04.182 12:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:04.444 null0 00:08:04.444 12:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:04.444 12:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:04.444 12:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:04.706 null1 00:08:04.706 12:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:04.706 12:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:04.706 12:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:04.706 null2 00:08:04.706 12:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:04.706 12:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:04.706 12:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:04.966 null3 
00:08:04.966 12:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:04.966 12:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:04.966 12:38:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:05.226 null4 00:08:05.226 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:05.226 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:05.226 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:05.226 null5 00:08:05.226 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:05.226 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:05.226 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:05.486 null6 00:08:05.486 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:05.486 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:05.486 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:05.746 null7 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:05.746 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.747 
12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:05.747 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.747 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.747 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:05.747 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.747 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:05.747 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:05.747 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.747 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:05.747 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.747 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:05.747 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:05.747 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:05.747 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:05.747 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3178691 3178693 3178694 3178696 3178698 3178700 3178702 3178704 00:08:05.747 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:05.747 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:05.747 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.747 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.007 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:06.007 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:06.007 
12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.007 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.007 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:06.007 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:06.007 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:06.007 12:38:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:06.007 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.007 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.007 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:06.007 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.007 12:38:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.007 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:06.007 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.007 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.007 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:06.007 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.007 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.007 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:06.269 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.269 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.269 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:06.269 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.269 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:08:06.269 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:06.269 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.269 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.269 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:06.269 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.269 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.269 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.269 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.269 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:06.269 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:06.269 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:06.269 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:06.269 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:06.269 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:06.269 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:06.530 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.792 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:07.053 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.053 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.053 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:07.053 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.053 12:38:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:07.053 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:07.053 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:07.053 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:07.053 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:07.053 12:38:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:07.053 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.054 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.054 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.054 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:07.054 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.054 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.054 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:07.315 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.577 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:07.577 12:38:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.839 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:07.839 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:07.839 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:07.839 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:07.839 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:07.839 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:07.839 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.839 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.839 12:38:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.839 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:07.839 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.839 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.839 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:07.839 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.839 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.839 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:07.839 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.839 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.839 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:07.839 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.839 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:08:07.839 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:08.100 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.100 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.100 12:38:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:08.100 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.100 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.100 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:08.100 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:08.100 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.100 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.100 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:08.100 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:08.100 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:08.100 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:08.100 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:08.100 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:08.100 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:08.100 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.100 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.100 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:08.361 12:38:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:08.361 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:08.622 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:08.622 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:08.622 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.622 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.622 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:08.622 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.622 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.622 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
00:08:08.622 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.622 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.622 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.622 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:08.622 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.622 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:08.622 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.622 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.622 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:08.622 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.622 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.623 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.623 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:08.623 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.623 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.623 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:08.883 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:08.883 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:08.883 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:08.883 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:08.883 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:08.883 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:08:08.883 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.883 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.883 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:08.883 12:38:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:08.883 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.883 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.884 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.145 12:38:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:09.145 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:09.406 
12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:09.406 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:09.406 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.406 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.406 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.406 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.406 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.406 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.406 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.406 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.406 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.406 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.406 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.406 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.406 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.406 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.406 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.667 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.667 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.667 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:09.667 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:09.667 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:09.667 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:09.667 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:09.667 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:09.667 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:09.667 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:09.667 rmmod nvme_tcp 00:08:09.667 rmmod nvme_fabrics 00:08:09.667 rmmod nvme_keyring 00:08:09.667 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:09.667 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:09.667 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # 
return 0 00:08:09.667 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3171444 ']' 00:08:09.667 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3171444 00:08:09.667 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3171444 ']' 00:08:09.667 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3171444 00:08:09.667 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:09.667 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.667 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3171444 00:08:09.927 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:09.927 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:09.927 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3171444' 00:08:09.927 killing process with pid 3171444 00:08:09.927 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3171444 00:08:09.927 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3171444 00:08:09.927 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:09.927 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:09.927 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:09.927 12:38:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:09.927 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:08:09.927 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:09.927 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:09.927 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:09.927 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:09.927 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.927 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.927 12:38:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.475 12:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:12.475 00:08:12.475 real 0m49.361s 00:08:12.475 user 3m20.278s 00:08:12.475 sys 0m17.383s 00:08:12.475 12:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.475 12:38:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:12.475 ************************************ 00:08:12.475 END TEST nvmf_ns_hotplug_stress 00:08:12.475 ************************************ 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:12.475 ************************************ 00:08:12.475 START TEST nvmf_delete_subsystem 00:08:12.475 ************************************ 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:12.475 * Looking for test storage... 00:08:12.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.475 12:38:42 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:12.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.475 --rc genhtml_branch_coverage=1 00:08:12.475 --rc genhtml_function_coverage=1 00:08:12.475 --rc genhtml_legend=1 
00:08:12.475 --rc geninfo_all_blocks=1 00:08:12.475 --rc geninfo_unexecuted_blocks=1 00:08:12.475 00:08:12.475 ' 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:12.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.475 --rc genhtml_branch_coverage=1 00:08:12.475 --rc genhtml_function_coverage=1 00:08:12.475 --rc genhtml_legend=1 00:08:12.475 --rc geninfo_all_blocks=1 00:08:12.475 --rc geninfo_unexecuted_blocks=1 00:08:12.475 00:08:12.475 ' 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:12.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.475 --rc genhtml_branch_coverage=1 00:08:12.475 --rc genhtml_function_coverage=1 00:08:12.475 --rc genhtml_legend=1 00:08:12.475 --rc geninfo_all_blocks=1 00:08:12.475 --rc geninfo_unexecuted_blocks=1 00:08:12.475 00:08:12.475 ' 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:12.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.475 --rc genhtml_branch_coverage=1 00:08:12.475 --rc genhtml_function_coverage=1 00:08:12.475 --rc genhtml_legend=1 00:08:12.475 --rc geninfo_all_blocks=1 00:08:12.475 --rc geninfo_unexecuted_blocks=1 00:08:12.475 00:08:12.475 ' 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:12.475 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@15 -- # shopt -s extglob 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:12.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:12.476 12:38:42 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:12.476 12:38:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:20.620 12:38:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:20.620 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:20.620 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:20.620 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:20.620 12:38:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:20.620 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:20.620 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:20.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:20.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:08:20.621 00:08:20.621 --- 10.0.0.2 ping statistics --- 00:08:20.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.621 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:20.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:20.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:08:20.621 00:08:20.621 --- 10.0.0.1 ping statistics --- 00:08:20.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.621 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3183957 00:08:20.621 12:38:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3183957 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3183957 ']' 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.621 12:38:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.621 [2024-11-28 12:38:49.863845] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:20.621 [2024-11-28 12:38:49.863915] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.621 [2024-11-28 12:38:50.007520] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:20.621 [2024-11-28 12:38:50.065373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:20.621 [2024-11-28 12:38:50.094474] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.621 [2024-11-28 12:38:50.094523] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:20.621 [2024-11-28 12:38:50.094531] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.621 [2024-11-28 12:38:50.094539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.621 [2024-11-28 12:38:50.094545] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:20.621 [2024-11-28 12:38:50.096208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.621 [2024-11-28 12:38:50.096214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.621 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.621 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:20.621 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:20.621 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:20.621 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.621 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.621 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:20.621 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.621 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.621 [2024-11-28 12:38:50.741168] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.882 [2024-11-28 12:38:50.765430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:08:20.882 NULL1 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.882 Delay0 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3184228 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:20.882 12:38:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:20.882 [2024-11-28 12:38:51.002219] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though 
this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:22.798 12:38:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:22.798 12:38:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.798 12:38:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 starting I/O failed: -6 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 starting I/O failed: -6 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 starting I/O failed: -6 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 starting I/O failed: -6 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 starting I/O failed: -6 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 starting I/O failed: -6 00:08:23.059 Write completed 
with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 starting I/O failed: -6 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 starting I/O failed: -6 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 starting I/O failed: -6 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 starting I/O failed: -6 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 [2024-11-28 12:38:53.084794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcc60 is same with the state(6) to be set 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 
00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Read completed 
with error (sct=0, sc=8) 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 [2024-11-28 12:38:53.085457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8100 is same with the state(6) to be set 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 starting I/O failed: -6 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 Write completed with error (sct=0, sc=8) 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.059 starting I/O failed: -6 00:08:23.059 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 starting I/O failed: -6 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Write completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 starting I/O failed: -6 00:08:23.060 Write completed with error (sct=0, sc=8) 00:08:23.060 Write completed with error (sct=0, sc=8) 00:08:23.060 Write completed with error (sct=0, sc=8) 00:08:23.060 Write completed with error (sct=0, sc=8) 00:08:23.060 starting I/O failed: -6 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Write completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 starting I/O failed: -6 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with 
error (sct=0, sc=8) 00:08:23.060 starting I/O failed: -6 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 starting I/O failed: -6 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 starting I/O failed: -6 00:08:23.060 Write completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 starting I/O failed: -6 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 [2024-11-28 12:38:53.087590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f150800d350 is same with the state(6) to be set 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Write completed with error (sct=0, sc=8) 00:08:23.060 Write completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Write completed with error (sct=0, sc=8) 00:08:23.060 Write completed with error (sct=0, sc=8) 00:08:23.060 Write completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 
00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Write completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Write completed with error (sct=0, sc=8) 00:08:23.060 Write completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Write completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Write completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Write completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Write completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Write completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Read completed with error (sct=0, sc=8) 00:08:23.060 Write completed with error (sct=0, sc=8) 00:08:24.007 [2024-11-28 12:38:54.059461] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edfbe0 is same with the state(6) to be set 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Write completed with error (sct=0, sc=8) 00:08:24.007 Write completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Write completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Write completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Write completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 [2024-11-28 12:38:54.085570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed82e0 is same with the state(6) to be set 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Write completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Write completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Write completed with error (sct=0, sc=8) 
00:08:24.007 Write completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Write completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Write completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 [2024-11-28 12:38:54.086424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edc1b0 is same with the state(6) to be set 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Write completed with error (sct=0, sc=8) 00:08:24.007 Write completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Write completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Write completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Write completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 [2024-11-28 12:38:54.087277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f150800d020 is same with the state(6) to be set 00:08:24.007 Write completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 
00:08:24.007 Write completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Write completed with error (sct=0, sc=8) 00:08:24.007 Write completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.007 Read completed with error (sct=0, sc=8) 00:08:24.008 Read completed with error (sct=0, sc=8) 00:08:24.008 Read completed with error (sct=0, sc=8) 00:08:24.008 Write completed with error (sct=0, sc=8) 00:08:24.008 Read completed with error (sct=0, sc=8) 00:08:24.008 [2024-11-28 12:38:54.087362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f150800d800 is same with the state(6) to be set 00:08:24.008 Initializing NVMe Controllers 00:08:24.008 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:24.008 Controller IO queue size 128, less than required. 00:08:24.008 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:24.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:24.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:24.008 Initialization complete. Launching workers. 
00:08:24.008 ======================================================== 00:08:24.008 Latency(us) 00:08:24.008 Device Information : IOPS MiB/s Average min max 00:08:24.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.09 0.08 910083.70 675.95 1009135.88 00:08:24.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.59 0.08 916415.93 274.60 2001626.26 00:08:24.008 ======================================================== 00:08:24.008 Total : 325.68 0.16 913244.98 274.60 2001626.26 00:08:24.008 00:08:24.008 [2024-11-28 12:38:54.087862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edfbe0 (9): Bad file descriptor 00:08:24.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:24.008 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.008 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:24.008 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3184228 00:08:24.008 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3184228 00:08:24.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3184228) - No such process 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3184228 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:24.673 12:38:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3184228 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3184228 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.673 
12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.673 [2024-11-28 12:38:54.622210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3184918 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3184918 00:08:24.673 12:38:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:24.959 [2024-11-28 12:38:54.827097] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:25.224 12:38:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:25.224 12:38:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3184918 00:08:25.224 12:38:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:25.794 12:38:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:25.794 12:38:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3184918 00:08:25.794 12:38:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:26.092 12:38:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:26.092 12:38:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3184918 00:08:26.092 12:38:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:26.663 12:38:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:26.663 12:38:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3184918 00:08:26.663 12:38:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:27.235 12:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:27.235 12:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3184918 00:08:27.235 12:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:27.808 12:38:57 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:27.808 12:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3184918 00:08:27.808 12:38:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:28.069 Initializing NVMe Controllers 00:08:28.069 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:28.069 Controller IO queue size 128, less than required. 00:08:28.069 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:28.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:28.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:28.069 Initialization complete. Launching workers. 00:08:28.069 ======================================================== 00:08:28.069 Latency(us) 00:08:28.069 Device Information : IOPS MiB/s Average min max 00:08:28.069 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001713.19 1000000.24 1004795.81 00:08:28.069 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002620.03 1000117.05 1007966.62 00:08:28.069 ======================================================== 00:08:28.069 Total : 256.00 0.12 1002166.61 1000000.24 1007966.62 00:08:28.069 00:08:28.069 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:28.069 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3184918 00:08:28.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3184918) - No such process 00:08:28.069 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # 
wait 3184918 00:08:28.069 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:28.069 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:28.069 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:28.069 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:28.069 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:28.069 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:28.069 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:28.069 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:28.069 rmmod nvme_tcp 00:08:28.330 rmmod nvme_fabrics 00:08:28.330 rmmod nvme_keyring 00:08:28.330 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3183957 ']' 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3183957 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3183957 ']' 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3183957 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:28.331 12:38:58 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3183957 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3183957' 00:08:28.331 killing process with pid 3183957 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3183957 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3183957 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.331 12:38:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:30.876 00:08:30.876 real 0m18.407s 00:08:30.876 user 0m30.644s 00:08:30.876 sys 0m6.820s 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:30.876 ************************************ 00:08:30.876 END TEST nvmf_delete_subsystem 00:08:30.876 ************************************ 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:30.876 ************************************ 00:08:30.876 START TEST nvmf_host_management 00:08:30.876 ************************************ 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:30.876 * Looking for test storage... 
00:08:30.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:30.876 12:39:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.876 12:39:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:30.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.876 --rc genhtml_branch_coverage=1 00:08:30.876 --rc genhtml_function_coverage=1 00:08:30.876 --rc genhtml_legend=1 00:08:30.876 --rc geninfo_all_blocks=1 00:08:30.876 --rc geninfo_unexecuted_blocks=1 00:08:30.876 00:08:30.876 ' 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:30.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.876 --rc genhtml_branch_coverage=1 00:08:30.876 --rc genhtml_function_coverage=1 00:08:30.876 --rc genhtml_legend=1 00:08:30.876 --rc geninfo_all_blocks=1 00:08:30.876 --rc geninfo_unexecuted_blocks=1 00:08:30.876 00:08:30.876 ' 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:30.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.876 --rc genhtml_branch_coverage=1 00:08:30.876 --rc genhtml_function_coverage=1 00:08:30.876 --rc genhtml_legend=1 00:08:30.876 --rc geninfo_all_blocks=1 00:08:30.876 --rc geninfo_unexecuted_blocks=1 00:08:30.876 00:08:30.876 ' 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:30.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.876 --rc genhtml_branch_coverage=1 00:08:30.876 --rc genhtml_function_coverage=1 00:08:30.876 --rc genhtml_legend=1 00:08:30.876 --rc geninfo_all_blocks=1 00:08:30.876 --rc geninfo_unexecuted_blocks=1 00:08:30.876 00:08:30.876 ' 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.876 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:30.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:30.877 12:39:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:39.026 12:39:08 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:39.026 12:39:08 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:39.026 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:39.026 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:39.026 12:39:08 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:39.026 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:39.026 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.026 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:39.027 12:39:08 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:39.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.708 ms 00:08:39.027 00:08:39.027 --- 10.0.0.2 ping statistics --- 00:08:39.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.027 rtt min/avg/max/mdev = 0.708/0.708/0.708/0.000 ms 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:39.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:08:39.027 00:08:39.027 --- 10.0.0.1 ping statistics --- 00:08:39.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.027 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3190046 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3190046 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3190046 ']' 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.027 12:39:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.027 [2024-11-28 12:39:08.442048] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:39.027 [2024-11-28 12:39:08.442115] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.027 [2024-11-28 12:39:08.585979] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:39.027 [2024-11-28 12:39:08.643973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.027 [2024-11-28 12:39:08.674698] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.027 [2024-11-28 12:39:08.674747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.027 [2024-11-28 12:39:08.674756] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.027 [2024-11-28 12:39:08.674763] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.027 [2024-11-28 12:39:08.674769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:39.027 [2024-11-28 12:39:08.677139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.027 [2024-11-28 12:39:08.677302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.027 [2024-11-28 12:39:08.677577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:39.027 [2024-11-28 12:39:08.677578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.288 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.288 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:39.288 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:39.288 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:39.288 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.288 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.288 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:39.288 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.288 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.288 [2024-11-28 12:39:09.312239] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.288 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.288 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:39.288 12:39:09 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:39.288 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.288 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:39.288 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:39.288 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:39.289 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.289 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.289 Malloc0 00:08:39.289 [2024-11-28 12:39:09.398595] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.289 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.289 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:39.289 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:39.289 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.550 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3190621 00:08:39.550 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3190621 /var/tmp/bdevperf.sock 00:08:39.550 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3190621 ']' 00:08:39.550 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:39.550 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.550 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:39.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:39.550 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:39.550 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:39.550 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.550 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.550 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:39.550 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:39.550 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:39.550 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:39.550 { 00:08:39.550 "params": { 00:08:39.550 "name": "Nvme$subsystem", 00:08:39.550 "trtype": "$TEST_TRANSPORT", 00:08:39.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:39.550 "adrfam": "ipv4", 00:08:39.550 "trsvcid": "$NVMF_PORT", 00:08:39.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:39.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:39.550 "hdgst": ${hdgst:-false}, 
00:08:39.550 "ddgst": ${ddgst:-false} 00:08:39.550 }, 00:08:39.550 "method": "bdev_nvme_attach_controller" 00:08:39.550 } 00:08:39.550 EOF 00:08:39.550 )") 00:08:39.550 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:39.550 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:39.550 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:39.550 12:39:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:39.550 "params": { 00:08:39.550 "name": "Nvme0", 00:08:39.550 "trtype": "tcp", 00:08:39.550 "traddr": "10.0.0.2", 00:08:39.550 "adrfam": "ipv4", 00:08:39.550 "trsvcid": "4420", 00:08:39.550 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:39.550 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:39.550 "hdgst": false, 00:08:39.550 "ddgst": false 00:08:39.550 }, 00:08:39.550 "method": "bdev_nvme_attach_controller" 00:08:39.550 }' 00:08:39.550 [2024-11-28 12:39:09.509546] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:39.550 [2024-11-28 12:39:09.509622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3190621 ] 00:08:39.550 [2024-11-28 12:39:09.647066] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:39.811 [2024-11-28 12:39:09.707048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.811 [2024-11-28 12:39:09.735212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.072 Running I/O for 10 seconds... 
00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.335 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.335 [2024-11-28 12:39:10.424248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc1630 is same with the state(6) to be set 00:08:40.335 [2024-11-28 12:39:10.424450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.335 [2024-11-28 12:39:10.424514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.335 [2024-11-28 12:39:10.424535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.335 [2024-11-28 12:39:10.424545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.335 [2024-11-28 12:39:10.424556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.335 [2024-11-28 12:39:10.424564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.335 [2024-11-28 12:39:10.424574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.335 [2024-11-28 12:39:10.424582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.335 [2024-11-28 12:39:10.424591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.335 [2024-11-28 12:39:10.424600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.335 [2024-11-28 12:39:10.424610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.424617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.424627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.424635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.424645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.424660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.424670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.424678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.424687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.424695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.424704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.424711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.424721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.424729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.424738] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.424746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.424755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.424763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.424772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.424780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.424790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.424797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.424807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.424814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.424824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.424831] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.424841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.424848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.424857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.424865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.424876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.424884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.424893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.424902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.424912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.424920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.424930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.424937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.424947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.424955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.424965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.424972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.424982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.424990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.425000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.425008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.425017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.425025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 
12:39:10.425034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.425041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.425050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.425058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.425068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.425077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.425087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.425099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.425110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.425118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.425129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.425137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.425148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.425156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.336 [2024-11-28 12:39:10.425178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.336 [2024-11-28 12:39:10.425185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 
nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:08:40.337 [2024-11-28 12:39:10.425353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425453] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.337 [2024-11-28 12:39:10.425664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:40.337 [2024-11-28 12:39:10.425822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:40.337 [2024-11-28 12:39:10.425841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:40.337 [2024-11-28 12:39:10.425858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.337 [2024-11-28 12:39:10.425866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:40.338 [2024-11-28 12:39:10.425873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.338 [2024-11-28 12:39:10.425882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf6bd0 is same with the state(6) to be set 00:08:40.338 [2024-11-28 12:39:10.427099] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:40.338 task offset: 71680 on job bdev=Nvme0n1 fails 00:08:40.338 00:08:40.338 Latency(us) 00:08:40.338 [2024-11-28T11:39:10.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.338 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:40.338 Job: Nvme0n1 ended in about 0.37 seconds with error 00:08:40.338 Verification LBA range: start 0x0 length 0x400 00:08:40.338 Nvme0n1 : 0.37 1376.78 86.05 172.10 0.00 39947.46 2052.79 37442.89 00:08:40.338 [2024-11-28T11:39:10.465Z] =================================================================================================================== 00:08:40.338 [2024-11-28T11:39:10.465Z] Total : 1376.78 86.05 172.10 0.00 39947.46 2052.79 37442.89 00:08:40.338 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.338 [2024-11-28 12:39:10.429366] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:40.338 [2024-11-28 12:39:10.429407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf6bd0 (9): Bad file descriptor 00:08:40.338 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:40.338 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.338 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.338 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.338 12:39:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:40.338 [2024-11-28 12:39:10.442470] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: 
*NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:41.724 12:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3190621 00:08:41.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3190621) - No such process 00:08:41.724 12:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:41.724 12:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:41.724 12:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:41.724 12:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:41.724 12:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:41.724 12:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:41.724 12:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:41.724 12:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:41.724 { 00:08:41.724 "params": { 00:08:41.724 "name": "Nvme$subsystem", 00:08:41.724 "trtype": "$TEST_TRANSPORT", 00:08:41.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.724 "adrfam": "ipv4", 00:08:41.724 "trsvcid": "$NVMF_PORT", 00:08:41.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.724 "hdgst": ${hdgst:-false}, 00:08:41.724 "ddgst": ${ddgst:-false} 00:08:41.724 }, 00:08:41.724 "method": 
"bdev_nvme_attach_controller" 00:08:41.724 } 00:08:41.724 EOF 00:08:41.724 )") 00:08:41.724 12:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:41.724 12:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:41.724 12:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:41.724 12:39:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:41.724 "params": { 00:08:41.724 "name": "Nvme0", 00:08:41.724 "trtype": "tcp", 00:08:41.724 "traddr": "10.0.0.2", 00:08:41.724 "adrfam": "ipv4", 00:08:41.724 "trsvcid": "4420", 00:08:41.724 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:41.724 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:41.724 "hdgst": false, 00:08:41.724 "ddgst": false 00:08:41.724 }, 00:08:41.724 "method": "bdev_nvme_attach_controller" 00:08:41.724 }' 00:08:41.724 [2024-11-28 12:39:11.502805] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:41.724 [2024-11-28 12:39:11.502862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3191228 ] 00:08:41.724 [2024-11-28 12:39:11.635922] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:41.724 [2024-11-28 12:39:11.695898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.724 [2024-11-28 12:39:11.712794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.984 Running I/O for 1 seconds... 
00:08:42.926 1662.00 IOPS, 103.88 MiB/s 00:08:42.926 Latency(us) 00:08:42.926 [2024-11-28T11:39:13.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.926 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:42.926 Verification LBA range: start 0x0 length 0x400 00:08:42.926 Nvme0n1 : 1.02 1689.42 105.59 0.00 0.00 37212.87 6048.89 32844.64 00:08:42.926 [2024-11-28T11:39:13.053Z] =================================================================================================================== 00:08:42.926 [2024-11-28T11:39:13.053Z] Total : 1689.42 105.59 0.00 0.00 37212.87 6048.89 32844.64 00:08:42.926 12:39:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:42.926 12:39:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:42.926 12:39:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:42.926 12:39:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:42.926 12:39:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:42.926 12:39:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:42.926 12:39:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:42.926 12:39:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:42.926 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:42.926 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:42.926 12:39:13 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:42.926 rmmod nvme_tcp 00:08:42.926 rmmod nvme_fabrics 00:08:42.926 rmmod nvme_keyring 00:08:43.187 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:43.187 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:43.187 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:43.187 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3190046 ']' 00:08:43.187 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3190046 00:08:43.187 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3190046 ']' 00:08:43.187 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3190046 00:08:43.187 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:43.187 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.187 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3190046 00:08:43.187 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:43.187 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:43.187 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3190046' 00:08:43.187 killing process with pid 3190046 00:08:43.187 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3190046 00:08:43.187 12:39:13 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3190046 00:08:43.187 [2024-11-28 12:39:13.220171] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:43.187 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:43.187 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:43.187 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:43.187 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:43.187 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:43.187 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:43.187 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:43.187 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:43.187 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:43.188 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.188 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.188 12:39:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:45.747 00:08:45.747 real 0m14.745s 00:08:45.747 user 0m22.813s 
00:08:45.747 sys 0m6.788s 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.747 ************************************ 00:08:45.747 END TEST nvmf_host_management 00:08:45.747 ************************************ 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:45.747 ************************************ 00:08:45.747 START TEST nvmf_lvol 00:08:45.747 ************************************ 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:45.747 * Looking for test storage... 
00:08:45.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.747 12:39:15 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:45.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.747 --rc genhtml_branch_coverage=1 00:08:45.747 --rc genhtml_function_coverage=1 00:08:45.747 --rc genhtml_legend=1 00:08:45.747 --rc geninfo_all_blocks=1 00:08:45.747 --rc geninfo_unexecuted_blocks=1 
00:08:45.747 00:08:45.747 ' 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:45.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.747 --rc genhtml_branch_coverage=1 00:08:45.747 --rc genhtml_function_coverage=1 00:08:45.747 --rc genhtml_legend=1 00:08:45.747 --rc geninfo_all_blocks=1 00:08:45.747 --rc geninfo_unexecuted_blocks=1 00:08:45.747 00:08:45.747 ' 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:45.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.747 --rc genhtml_branch_coverage=1 00:08:45.747 --rc genhtml_function_coverage=1 00:08:45.747 --rc genhtml_legend=1 00:08:45.747 --rc geninfo_all_blocks=1 00:08:45.747 --rc geninfo_unexecuted_blocks=1 00:08:45.747 00:08:45.747 ' 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:45.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.747 --rc genhtml_branch_coverage=1 00:08:45.747 --rc genhtml_function_coverage=1 00:08:45.747 --rc genhtml_legend=1 00:08:45.747 --rc geninfo_all_blocks=1 00:08:45.747 --rc geninfo_unexecuted_blocks=1 00:08:45.747 00:08:45.747 ' 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.747 12:39:15 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.747 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:45.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:45.748 12:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:53.897 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:53.897 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.897 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.898 
12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:53.898 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.898 12:39:22 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:53.898 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:53.898 12:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:53.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:53.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:08:53.898 00:08:53.898 --- 10.0.0.2 ping statistics --- 00:08:53.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.898 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:53.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:53.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:08:53.898 00:08:53.898 --- 10.0.0.1 ping statistics --- 00:08:53.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.898 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3195819 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3195819 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3195819 ']' 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.898 12:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:53.898 [2024-11-28 12:39:23.287390] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:53.898 [2024-11-28 12:39:23.287460] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.898 [2024-11-28 12:39:23.431693] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:08:53.898 [2024-11-28 12:39:23.491126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:53.898 [2024-11-28 12:39:23.519068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.898 [2024-11-28 12:39:23.519110] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.898 [2024-11-28 12:39:23.519119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.898 [2024-11-28 12:39:23.519126] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.898 [2024-11-28 12:39:23.519133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:53.898 [2024-11-28 12:39:23.520925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.898 [2024-11-28 12:39:23.521084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.898 [2024-11-28 12:39:23.521085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.160 12:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.160 12:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:54.160 12:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:54.160 12:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:54.160 12:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:54.160 12:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.160 12:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t tcp -o -u 8192 00:08:54.422 [2024-11-28 12:39:24.327978] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.422 12:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.684 12:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:54.684 12:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.945 12:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:54.946 12:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:54.946 12:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:55.206 12:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4429c311-88f5-41a5-9852-bad51a52377a 00:08:55.206 12:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4429c311-88f5-41a5-9852-bad51a52377a lvol 20 00:08:55.467 12:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c609690d-f230-470a-bc66-1ff751cbc8ba 00:08:55.467 12:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:55.728 12:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 c609690d-f230-470a-bc66-1ff751cbc8ba 00:08:55.728 12:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:55.989 [2024-11-28 12:39:25.979882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.989 12:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:56.251 12:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3196311 00:08:56.251 12:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:56.251 12:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:57.195 12:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c609690d-f230-470a-bc66-1ff751cbc8ba MY_SNAPSHOT 00:08:57.456 12:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=396da224-dfdc-47fd-a0cb-a1652b94cdb6 00:08:57.456 12:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c609690d-f230-470a-bc66-1ff751cbc8ba 30 00:08:57.716 12:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 396da224-dfdc-47fd-a0cb-a1652b94cdb6 MY_CLONE 00:08:57.977 12:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- 
# clone=59638cdd-3f64-4ac0-a807-2715b3944f54 00:08:57.977 12:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 59638cdd-3f64-4ac0-a807-2715b3944f54 00:08:58.238 12:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3196311 00:09:08.242 Initializing NVMe Controllers 00:09:08.242 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:08.242 Controller IO queue size 128, less than required. 00:09:08.242 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:08.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:08.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:08.243 Initialization complete. Launching workers. 00:09:08.243 ======================================================== 00:09:08.243 Latency(us) 00:09:08.243 Device Information : IOPS MiB/s Average min max 00:09:08.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16011.80 62.55 7996.26 1866.47 42121.24 00:09:08.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16407.20 64.09 7801.36 664.16 59978.17 00:09:08.243 ======================================================== 00:09:08.243 Total : 32419.00 126.64 7897.63 664.16 59978.17 00:09:08.243 00:09:08.243 12:39:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:08.243 12:39:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c609690d-f230-470a-bc66-1ff751cbc8ba 00:09:08.243 12:39:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4429c311-88f5-41a5-9852-bad51a52377a 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:08.243 rmmod nvme_tcp 00:09:08.243 rmmod nvme_fabrics 00:09:08.243 rmmod nvme_keyring 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3195819 ']' 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3195819 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3195819 ']' 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3195819 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:08.243 12:39:37 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3195819 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3195819' 00:09:08.243 killing process with pid 3195819 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3195819 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3195819 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.243 12:39:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:09.630 00:09:09.630 real 0m24.039s 00:09:09.630 user 1m4.566s 00:09:09.630 sys 0m8.649s 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:09.630 ************************************ 00:09:09.630 END TEST nvmf_lvol 00:09:09.630 ************************************ 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.630 ************************************ 00:09:09.630 START TEST nvmf_lvs_grow 00:09:09.630 ************************************ 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:09.630 * Looking for test storage... 
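The nvmf_lvol test that just finished drove a fixed RPC sequence against the target. A condensed replay of those calls, using the bdev names and UUIDs reported in the log (the rpc() wrapper here is a stand-in that only prints the calls; the real test invokes scripts/rpc.py against the running nvmf_tgt):

```shell
# Hedged recap of the nvmf_lvol RPC lifecycle logged above; print-only stand-in.
rpc() { echo "rpc.py $*"; }

LVS=4429c311-88f5-41a5-9852-bad51a52377a    # lvstore on the raid0 bdev
LVOL=c609690d-f230-470a-bc66-1ff751cbc8ba   # 20 MiB logical volume
SNAP=396da224-dfdc-47fd-a0cb-a1652b94cdb6   # MY_SNAPSHOT
CLONE=59638cdd-3f64-4ac0-a807-2715b3944f54  # MY_CLONE

rpc bdev_malloc_create 64 512                                 # Malloc0
rpc bdev_malloc_create 64 512                                 # Malloc1
rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
rpc bdev_lvol_create_lvstore raid0 lvs                        # returns $LVS
rpc bdev_lvol_create -u "$LVS" lvol 20                        # returns $LVOL
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_lvol_snapshot "$LVOL" MY_SNAPSHOT                    # returns $SNAP
rpc bdev_lvol_resize "$LVOL" 30                               # grow under I/O
rpc bdev_lvol_clone "$SNAP" MY_CLONE                          # returns $CLONE
rpc bdev_lvol_inflate "$CLONE"                                # decouple from snapshot
```

The snapshot, resize, clone, and inflate steps run while spdk_nvme_perf drives randwrite I/O, which is what exercises the lvol paths under load before the teardown above.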
00:09:09.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:09.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.630 --rc genhtml_branch_coverage=1 00:09:09.630 --rc 
genhtml_function_coverage=1 00:09:09.630 --rc genhtml_legend=1 00:09:09.630 --rc geninfo_all_blocks=1 00:09:09.630 --rc geninfo_unexecuted_blocks=1 00:09:09.630 00:09:09.630 ' 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:09.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.630 --rc genhtml_branch_coverage=1 00:09:09.630 --rc genhtml_function_coverage=1 00:09:09.630 --rc genhtml_legend=1 00:09:09.630 --rc geninfo_all_blocks=1 00:09:09.630 --rc geninfo_unexecuted_blocks=1 00:09:09.630 00:09:09.630 ' 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:09.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.630 --rc genhtml_branch_coverage=1 00:09:09.630 --rc genhtml_function_coverage=1 00:09:09.630 --rc genhtml_legend=1 00:09:09.630 --rc geninfo_all_blocks=1 00:09:09.630 --rc geninfo_unexecuted_blocks=1 00:09:09.630 00:09:09.630 ' 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:09.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.630 --rc genhtml_branch_coverage=1 00:09:09.630 --rc genhtml_function_coverage=1 00:09:09.630 --rc genhtml_legend=1 00:09:09.630 --rc geninfo_all_blocks=1 00:09:09.630 --rc geninfo_unexecuted_blocks=1 00:09:09.630 00:09:09.630 ' 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.630 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.631 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.631 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.631 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:09.631 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.893 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:09.893 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.893 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.893 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.893 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.893 12:39:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.893 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.893 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.893 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.893 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.893 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:09.893 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:09.893 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:09.893 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:09.893 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.893 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:09.893 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:09.893 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:09.893 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.893 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.893 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.893 
12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:09.893 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:09.893 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:09.893 12:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:18.043 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:18.043 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:18.043 
12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:18.043 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:18.043 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:18.043 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:18.044 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.044 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:18.044 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:18.044 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:18.044 12:39:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:18.044 12:39:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:18.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:18.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:09:18.044 00:09:18.044 --- 10.0.0.2 ping statistics --- 00:09:18.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.044 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:18.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:18.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:09:18.044 00:09:18.044 --- 10.0.0.1 ping statistics --- 00:09:18.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.044 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3202830 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3202830 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3202830 ']' 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.044 12:39:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:18.044 [2024-11-28 12:39:47.361958] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
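[Editor's note] The `nvmf_tcp_init` steps traced above amount to a two-port loopback topology: one physical port is moved into a private network namespace to act as the target, while the other stays in the root namespace as the initiator. A minimal sketch of those steps, assuming the same `cvl_0_0`/`cvl_0_1` interface names and 10.0.0.0/24 addressing seen in this log (requires root; adjust names for other hardware):

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init sequence from nvmf/common.sh, as run in this log.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target-side port, moved into the namespace
INI_IF=cvl_0_1   # initiator-side port, left in the root namespace

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic (port 4420) in on the initiator-facing interface.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Verify both directions before starting the target.
ping -c 1 10.0.0.2                       # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1   # target ns -> root ns
```

With this in place, the nvmf target is launched with `ip netns exec "$NS" …` so that it listens on 10.0.0.2 inside the namespace, as the log shows next.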
00:09:18.044 [2024-11-28 12:39:47.362027] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.044 [2024-11-28 12:39:47.506005] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:18.044 [2024-11-28 12:39:47.565063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.044 [2024-11-28 12:39:47.592247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.044 [2024-11-28 12:39:47.592293] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:18.044 [2024-11-28 12:39:47.592301] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.044 [2024-11-28 12:39:47.592308] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.044 [2024-11-28 12:39:47.592314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:18.044 [2024-11-28 12:39:47.593024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.307 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.307 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:18.307 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:18.307 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:18.307 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:18.307 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.307 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:18.307 [2024-11-28 12:39:48.399963] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:18.568 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:18.568 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:18.568 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.568 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:18.568 ************************************ 00:09:18.568 START TEST lvs_grow_clean 00:09:18.568 ************************************ 00:09:18.568 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:18.568 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:09:18.568 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:18.568 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:18.568 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:18.568 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:18.568 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:18.568 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:18.568 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:18.568 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:18.568 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:18.568 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:18.829 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8b541e70-823f-406a-a8c4-3d0ae00ad31e 00:09:18.829 12:39:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b541e70-823f-406a-a8c4-3d0ae00ad31e 00:09:18.829 12:39:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:19.089 12:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:19.089 12:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:19.089 12:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8b541e70-823f-406a-a8c4-3d0ae00ad31e lvol 150 00:09:19.350 12:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d5c36dd0-eae9-4645-a9fe-54fa75b2ceca 00:09:19.350 12:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:19.350 12:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:19.350 [2024-11-28 12:39:49.415396] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:19.350 [2024-11-28 12:39:49.415469] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:19.350 true 00:09:19.350 12:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b541e70-823f-406a-a8c4-3d0ae00ad31e 00:09:19.350 12:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:19.610 12:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:19.610 12:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:19.871 12:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d5c36dd0-eae9-4645-a9fe-54fa75b2ceca 00:09:19.871 12:39:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:20.131 [2024-11-28 12:39:50.140109] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.131 12:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:20.392 12:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3203404 00:09:20.392 12:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:20.392 12:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 
3203404 /var/tmp/bdevperf.sock 00:09:20.392 12:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:20.392 12:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3203404 ']' 00:09:20.392 12:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:20.392 12:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.392 12:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:20.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:20.392 12:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.392 12:39:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:20.392 [2024-11-28 12:39:50.408817] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:20.392 [2024-11-28 12:39:50.408893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3203404 ] 00:09:20.653 [2024-11-28 12:39:50.545141] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:20.653 [2024-11-28 12:39:50.605075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.653 [2024-11-28 12:39:50.632832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.226 12:39:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.226 12:39:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:21.226 12:39:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:21.487 Nvme0n1 00:09:21.487 12:39:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:21.749 [ 00:09:21.749 { 00:09:21.749 "name": "Nvme0n1", 00:09:21.749 "aliases": [ 00:09:21.749 "d5c36dd0-eae9-4645-a9fe-54fa75b2ceca" 00:09:21.749 ], 00:09:21.749 "product_name": "NVMe disk", 00:09:21.749 "block_size": 4096, 00:09:21.749 "num_blocks": 38912, 00:09:21.749 "uuid": "d5c36dd0-eae9-4645-a9fe-54fa75b2ceca", 00:09:21.749 "numa_id": 0, 00:09:21.749 "assigned_rate_limits": { 00:09:21.749 "rw_ios_per_sec": 0, 00:09:21.749 "rw_mbytes_per_sec": 0, 00:09:21.749 "r_mbytes_per_sec": 0, 00:09:21.749 "w_mbytes_per_sec": 0 00:09:21.749 }, 00:09:21.749 "claimed": false, 00:09:21.749 "zoned": false, 00:09:21.749 "supported_io_types": { 00:09:21.749 "read": true, 00:09:21.749 "write": true, 00:09:21.749 "unmap": true, 00:09:21.749 "flush": true, 00:09:21.749 "reset": true, 00:09:21.749 "nvme_admin": true, 00:09:21.749 "nvme_io": true, 00:09:21.749 "nvme_io_md": false, 00:09:21.749 "write_zeroes": true, 00:09:21.749 "zcopy": false, 00:09:21.749 
"get_zone_info": false, 00:09:21.749 "zone_management": false, 00:09:21.749 "zone_append": false, 00:09:21.749 "compare": true, 00:09:21.749 "compare_and_write": true, 00:09:21.749 "abort": true, 00:09:21.749 "seek_hole": false, 00:09:21.749 "seek_data": false, 00:09:21.749 "copy": true, 00:09:21.749 "nvme_iov_md": false 00:09:21.749 }, 00:09:21.749 "memory_domains": [ 00:09:21.749 { 00:09:21.749 "dma_device_id": "system", 00:09:21.749 "dma_device_type": 1 00:09:21.749 } 00:09:21.749 ], 00:09:21.749 "driver_specific": { 00:09:21.749 "nvme": [ 00:09:21.749 { 00:09:21.749 "trid": { 00:09:21.749 "trtype": "TCP", 00:09:21.749 "adrfam": "IPv4", 00:09:21.749 "traddr": "10.0.0.2", 00:09:21.749 "trsvcid": "4420", 00:09:21.749 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:21.749 }, 00:09:21.749 "ctrlr_data": { 00:09:21.749 "cntlid": 1, 00:09:21.749 "vendor_id": "0x8086", 00:09:21.749 "model_number": "SPDK bdev Controller", 00:09:21.749 "serial_number": "SPDK0", 00:09:21.749 "firmware_revision": "25.01", 00:09:21.749 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:21.749 "oacs": { 00:09:21.749 "security": 0, 00:09:21.749 "format": 0, 00:09:21.749 "firmware": 0, 00:09:21.749 "ns_manage": 0 00:09:21.750 }, 00:09:21.750 "multi_ctrlr": true, 00:09:21.750 "ana_reporting": false 00:09:21.750 }, 00:09:21.750 "vs": { 00:09:21.750 "nvme_version": "1.3" 00:09:21.750 }, 00:09:21.750 "ns_data": { 00:09:21.750 "id": 1, 00:09:21.750 "can_share": true 00:09:21.750 } 00:09:21.750 } 00:09:21.750 ], 00:09:21.750 "mp_policy": "active_passive" 00:09:21.750 } 00:09:21.750 } 00:09:21.750 ] 00:09:21.750 12:39:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3203736 00:09:21.750 12:39:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:21.750 12:39:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:21.750 Running I/O for 10 seconds... 00:09:23.132 Latency(us) 00:09:23.132 [2024-11-28T11:39:53.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.132 Nvme0n1 : 1.00 24916.00 97.33 0.00 0.00 0.00 0.00 0.00 00:09:23.132 [2024-11-28T11:39:53.259Z] =================================================================================================================== 00:09:23.132 [2024-11-28T11:39:53.259Z] Total : 24916.00 97.33 0.00 0.00 0.00 0.00 0.00 00:09:23.132 00:09:23.703 12:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8b541e70-823f-406a-a8c4-3d0ae00ad31e 00:09:23.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.963 Nvme0n1 : 2.00 25042.00 97.82 0.00 0.00 0.00 0.00 0.00 00:09:23.963 [2024-11-28T11:39:54.090Z] =================================================================================================================== 00:09:23.963 [2024-11-28T11:39:54.090Z] Total : 25042.00 97.82 0.00 0.00 0.00 0.00 0.00 00:09:23.963 00:09:23.963 true 00:09:23.963 12:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:23.963 12:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b541e70-823f-406a-a8c4-3d0ae00ad31e 00:09:24.224 12:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:24.224 12:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:24.224 12:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3203736 00:09:24.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.798 Nvme0n1 : 3.00 25084.00 97.98 0.00 0.00 0.00 0.00 0.00 00:09:24.798 [2024-11-28T11:39:54.925Z] =================================================================================================================== 00:09:24.798 [2024-11-28T11:39:54.925Z] Total : 25084.00 97.98 0.00 0.00 0.00 0.00 0.00 00:09:24.798 00:09:25.748 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.748 Nvme0n1 : 4.00 25140.25 98.20 0.00 0.00 0.00 0.00 0.00 00:09:25.748 [2024-11-28T11:39:55.875Z] =================================================================================================================== 00:09:25.748 [2024-11-28T11:39:55.875Z] Total : 25140.25 98.20 0.00 0.00 0.00 0.00 0.00 00:09:25.748 00:09:27.139 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.139 Nvme0n1 : 5.00 25176.80 98.35 0.00 0.00 0.00 0.00 0.00 00:09:27.139 [2024-11-28T11:39:57.266Z] =================================================================================================================== 00:09:27.139 [2024-11-28T11:39:57.266Z] Total : 25176.80 98.35 0.00 0.00 0.00 0.00 0.00 00:09:27.139 00:09:28.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.082 Nvme0n1 : 6.00 25204.67 98.46 0.00 0.00 0.00 0.00 0.00 00:09:28.082 [2024-11-28T11:39:58.209Z] =================================================================================================================== 00:09:28.082 [2024-11-28T11:39:58.209Z] Total : 25204.67 98.46 0.00 0.00 0.00 0.00 0.00 00:09:28.082 00:09:29.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.047 Nvme0n1 : 7.00 25224.57 98.53 0.00 0.00 0.00 0.00 
0.00 00:09:29.047 [2024-11-28T11:39:59.174Z] =================================================================================================================== 00:09:29.047 [2024-11-28T11:39:59.174Z] Total : 25224.57 98.53 0.00 0.00 0.00 0.00 0.00 00:09:29.047 00:09:30.099 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.099 Nvme0n1 : 8.00 25233.50 98.57 0.00 0.00 0.00 0.00 0.00 00:09:30.099 [2024-11-28T11:40:00.226Z] =================================================================================================================== 00:09:30.099 [2024-11-28T11:40:00.226Z] Total : 25233.50 98.57 0.00 0.00 0.00 0.00 0.00 00:09:30.099 00:09:31.041 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.041 Nvme0n1 : 9.00 25243.56 98.61 0.00 0.00 0.00 0.00 0.00 00:09:31.041 [2024-11-28T11:40:01.168Z] =================================================================================================================== 00:09:31.041 [2024-11-28T11:40:01.168Z] Total : 25243.56 98.61 0.00 0.00 0.00 0.00 0.00 00:09:31.041 00:09:31.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.982 Nvme0n1 : 10.00 25253.50 98.65 0.00 0.00 0.00 0.00 0.00 00:09:31.982 [2024-11-28T11:40:02.109Z] =================================================================================================================== 00:09:31.982 [2024-11-28T11:40:02.109Z] Total : 25253.50 98.65 0.00 0.00 0.00 0.00 0.00 00:09:31.982 00:09:31.982 00:09:31.982 Latency(us) 00:09:31.982 [2024-11-28T11:40:02.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.982 Nvme0n1 : 10.00 25254.65 98.65 0.00 0.00 5064.69 2080.16 8758.57 00:09:31.982 [2024-11-28T11:40:02.109Z] =================================================================================================================== 00:09:31.982 
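[Editor's note] The summary row below is internally consistent: 25254.65 IOPS at the 4096-byte I/O size configured for this run works out to 25254.65 × 4096 / 2^20 ≈ 98.65 MiB/s, matching the reported throughput. A one-line check:

```shell
# Recompute MiB/s from the reported IOPS and the 4 KiB I/O size.
awk 'BEGIN{printf "%.2f MiB/s\n", 25254.65 * 4096 / 1048576}'
```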
[2024-11-28T11:40:02.109Z] Total : 25254.65 98.65 0.00 0.00 5064.69 2080.16 8758.57 00:09:31.982 { 00:09:31.982 "results": [ 00:09:31.982 { 00:09:31.982 "job": "Nvme0n1", 00:09:31.982 "core_mask": "0x2", 00:09:31.982 "workload": "randwrite", 00:09:31.982 "status": "finished", 00:09:31.982 "queue_depth": 128, 00:09:31.982 "io_size": 4096, 00:09:31.982 "runtime": 10.004614, 00:09:31.982 "iops": 25254.647505640896, 00:09:31.982 "mibps": 98.65096681890975, 00:09:31.982 "io_failed": 0, 00:09:31.982 "io_timeout": 0, 00:09:31.982 "avg_latency_us": 5064.688108773861, 00:09:31.982 "min_latency_us": 2080.1603742064817, 00:09:31.982 "max_latency_us": 8758.56999665887 00:09:31.982 } 00:09:31.982 ], 00:09:31.982 "core_count": 1 00:09:31.982 } 00:09:31.982 12:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3203404 00:09:31.982 12:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3203404 ']' 00:09:31.982 12:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3203404 00:09:31.982 12:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:31.983 12:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.983 12:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3203404 00:09:31.983 12:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:31.983 12:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:31.983 12:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3203404' 00:09:31.983 killing 
process with pid 3203404 00:09:31.983 12:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3203404 00:09:31.983 Received shutdown signal, test time was about 10.000000 seconds 00:09:31.983 00:09:31.983 Latency(us) 00:09:31.983 [2024-11-28T11:40:02.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.983 [2024-11-28T11:40:02.110Z] =================================================================================================================== 00:09:31.983 [2024-11-28T11:40:02.110Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:31.983 12:40:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3203404 00:09:31.983 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:32.242 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:32.503 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b541e70-823f-406a-a8c4-3d0ae00ad31e 00:09:32.503 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:32.503 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:32.503 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:32.503 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:32.764 [2024-11-28 12:40:02.706875] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:32.764 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b541e70-823f-406a-a8c4-3d0ae00ad31e 00:09:32.764 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:32.764 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b541e70-823f-406a-a8c4-3d0ae00ad31e 00:09:32.764 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:32.764 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.764 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:32.764 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.764 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:32.764 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.764 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:32.764 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:32.764 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b541e70-823f-406a-a8c4-3d0ae00ad31e 00:09:33.025 request: 00:09:33.025 { 00:09:33.025 "uuid": "8b541e70-823f-406a-a8c4-3d0ae00ad31e", 00:09:33.025 "method": "bdev_lvol_get_lvstores", 00:09:33.025 "req_id": 1 00:09:33.025 } 00:09:33.025 Got JSON-RPC error response 00:09:33.025 response: 00:09:33.025 { 00:09:33.025 "code": -19, 00:09:33.025 "message": "No such device" 00:09:33.025 } 00:09:33.025 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:33.025 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:33.025 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:33.025 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:33.025 12:40:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:33.025 aio_bdev 00:09:33.025 12:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d5c36dd0-eae9-4645-a9fe-54fa75b2ceca 00:09:33.025 12:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=d5c36dd0-eae9-4645-a9fe-54fa75b2ceca 
00:09:33.025 12:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.025 12:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:33.025 12:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.025 12:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.025 12:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:33.286 12:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d5c36dd0-eae9-4645-a9fe-54fa75b2ceca -t 2000 00:09:33.547 [ 00:09:33.547 { 00:09:33.547 "name": "d5c36dd0-eae9-4645-a9fe-54fa75b2ceca", 00:09:33.547 "aliases": [ 00:09:33.547 "lvs/lvol" 00:09:33.547 ], 00:09:33.547 "product_name": "Logical Volume", 00:09:33.547 "block_size": 4096, 00:09:33.547 "num_blocks": 38912, 00:09:33.547 "uuid": "d5c36dd0-eae9-4645-a9fe-54fa75b2ceca", 00:09:33.547 "assigned_rate_limits": { 00:09:33.547 "rw_ios_per_sec": 0, 00:09:33.547 "rw_mbytes_per_sec": 0, 00:09:33.547 "r_mbytes_per_sec": 0, 00:09:33.547 "w_mbytes_per_sec": 0 00:09:33.547 }, 00:09:33.547 "claimed": false, 00:09:33.547 "zoned": false, 00:09:33.547 "supported_io_types": { 00:09:33.547 "read": true, 00:09:33.547 "write": true, 00:09:33.547 "unmap": true, 00:09:33.547 "flush": false, 00:09:33.547 "reset": true, 00:09:33.547 "nvme_admin": false, 00:09:33.547 "nvme_io": false, 00:09:33.547 "nvme_io_md": false, 00:09:33.547 "write_zeroes": true, 00:09:33.547 "zcopy": false, 00:09:33.547 "get_zone_info": false, 00:09:33.547 "zone_management": false, 00:09:33.547 "zone_append": 
false, 00:09:33.547 "compare": false, 00:09:33.547 "compare_and_write": false, 00:09:33.547 "abort": false, 00:09:33.547 "seek_hole": true, 00:09:33.547 "seek_data": true, 00:09:33.547 "copy": false, 00:09:33.547 "nvme_iov_md": false 00:09:33.547 }, 00:09:33.547 "driver_specific": { 00:09:33.547 "lvol": { 00:09:33.547 "lvol_store_uuid": "8b541e70-823f-406a-a8c4-3d0ae00ad31e", 00:09:33.547 "base_bdev": "aio_bdev", 00:09:33.547 "thin_provision": false, 00:09:33.547 "num_allocated_clusters": 38, 00:09:33.547 "snapshot": false, 00:09:33.547 "clone": false, 00:09:33.547 "esnap_clone": false 00:09:33.547 } 00:09:33.547 } 00:09:33.547 } 00:09:33.547 ] 00:09:33.547 12:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:33.547 12:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b541e70-823f-406a-a8c4-3d0ae00ad31e 00:09:33.547 12:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:33.547 12:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:33.808 12:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b541e70-823f-406a-a8c4-3d0ae00ad31e 00:09:33.808 12:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:33.808 12:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:33.808 12:40:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_delete d5c36dd0-eae9-4645-a9fe-54fa75b2ceca 00:09:34.069 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8b541e70-823f-406a-a8c4-3d0ae00ad31e 00:09:34.330 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:34.330 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:34.330 00:09:34.330 real 0m15.951s 00:09:34.330 user 0m15.539s 00:09:34.330 sys 0m1.434s 00:09:34.330 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.330 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:34.330 ************************************ 00:09:34.330 END TEST lvs_grow_clean 00:09:34.330 ************************************ 00:09:34.591 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:34.591 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:34.591 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.591 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:34.591 ************************************ 00:09:34.591 START TEST lvs_grow_dirty 00:09:34.591 ************************************ 00:09:34.591 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:34.591 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:34.591 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:34.591 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:34.591 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:34.591 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:34.591 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:34.591 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:34.591 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:34.591 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:34.591 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:34.853 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:34.853 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=24170085-8cc7-4469-8fc3-653cc367f258 00:09:34.853 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24170085-8cc7-4469-8fc3-653cc367f258 00:09:34.853 12:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:35.112 12:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:35.112 12:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:35.112 12:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 24170085-8cc7-4469-8fc3-653cc367f258 lvol 150 00:09:35.372 12:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e95d6d86-db0e-455c-8aff-f69b83a223ad 00:09:35.372 12:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:35.372 12:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:35.372 [2024-11-28 12:40:05.417973] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:35.372 [2024-11-28 12:40:05.418016] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:35.372 true 00:09:35.372 12:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24170085-8cc7-4469-8fc3-653cc367f258 00:09:35.372 12:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:35.633 12:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:35.633 12:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:35.894 12:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e95d6d86-db0e-455c-8aff-f69b83a223ad 00:09:35.894 12:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:36.156 [2024-11-28 12:40:06.090390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.156 12:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:36.156 12:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3206695 00:09:36.156 12:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:36.156 12:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:36.156 12:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3206695 /var/tmp/bdevperf.sock 00:09:36.156 12:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3206695 ']' 00:09:36.156 12:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:36.156 12:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.156 12:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:36.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:36.156 12:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.156 12:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:36.417 [2024-11-28 12:40:06.312083] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:36.417 [2024-11-28 12:40:06.312138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3206695 ] 00:09:36.417 [2024-11-28 12:40:06.444518] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:36.417 [2024-11-28 12:40:06.496760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.417 [2024-11-28 12:40:06.513023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.358 12:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.358 12:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:37.358 12:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:37.618 Nvme0n1 00:09:37.618 12:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:37.618 [ 00:09:37.618 { 00:09:37.618 "name": "Nvme0n1", 00:09:37.618 "aliases": [ 00:09:37.618 "e95d6d86-db0e-455c-8aff-f69b83a223ad" 00:09:37.618 ], 00:09:37.618 "product_name": "NVMe disk", 00:09:37.618 "block_size": 4096, 00:09:37.618 "num_blocks": 38912, 00:09:37.618 "uuid": "e95d6d86-db0e-455c-8aff-f69b83a223ad", 00:09:37.618 "numa_id": 0, 00:09:37.618 "assigned_rate_limits": { 00:09:37.618 "rw_ios_per_sec": 0, 00:09:37.618 "rw_mbytes_per_sec": 0, 00:09:37.618 "r_mbytes_per_sec": 0, 00:09:37.618 "w_mbytes_per_sec": 0 00:09:37.618 }, 00:09:37.618 "claimed": false, 00:09:37.618 "zoned": false, 00:09:37.618 "supported_io_types": { 00:09:37.618 "read": true, 00:09:37.618 "write": true, 00:09:37.618 "unmap": true, 00:09:37.618 "flush": true, 00:09:37.618 "reset": true, 00:09:37.618 "nvme_admin": true, 00:09:37.618 "nvme_io": true, 00:09:37.618 "nvme_io_md": false, 00:09:37.618 "write_zeroes": true, 00:09:37.619 "zcopy": false, 00:09:37.619 
"get_zone_info": false, 00:09:37.619 "zone_management": false, 00:09:37.619 "zone_append": false, 00:09:37.619 "compare": true, 00:09:37.619 "compare_and_write": true, 00:09:37.619 "abort": true, 00:09:37.619 "seek_hole": false, 00:09:37.619 "seek_data": false, 00:09:37.619 "copy": true, 00:09:37.619 "nvme_iov_md": false 00:09:37.619 }, 00:09:37.619 "memory_domains": [ 00:09:37.619 { 00:09:37.619 "dma_device_id": "system", 00:09:37.619 "dma_device_type": 1 00:09:37.619 } 00:09:37.619 ], 00:09:37.619 "driver_specific": { 00:09:37.619 "nvme": [ 00:09:37.619 { 00:09:37.619 "trid": { 00:09:37.619 "trtype": "TCP", 00:09:37.619 "adrfam": "IPv4", 00:09:37.619 "traddr": "10.0.0.2", 00:09:37.619 "trsvcid": "4420", 00:09:37.619 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:37.619 }, 00:09:37.619 "ctrlr_data": { 00:09:37.619 "cntlid": 1, 00:09:37.619 "vendor_id": "0x8086", 00:09:37.619 "model_number": "SPDK bdev Controller", 00:09:37.619 "serial_number": "SPDK0", 00:09:37.619 "firmware_revision": "25.01", 00:09:37.619 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:37.619 "oacs": { 00:09:37.619 "security": 0, 00:09:37.619 "format": 0, 00:09:37.619 "firmware": 0, 00:09:37.619 "ns_manage": 0 00:09:37.619 }, 00:09:37.619 "multi_ctrlr": true, 00:09:37.619 "ana_reporting": false 00:09:37.619 }, 00:09:37.619 "vs": { 00:09:37.619 "nvme_version": "1.3" 00:09:37.619 }, 00:09:37.619 "ns_data": { 00:09:37.619 "id": 1, 00:09:37.619 "can_share": true 00:09:37.619 } 00:09:37.619 } 00:09:37.619 ], 00:09:37.619 "mp_policy": "active_passive" 00:09:37.619 } 00:09:37.619 } 00:09:37.619 ] 00:09:37.619 12:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3206898 00:09:37.619 12:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:37.619 12:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:37.879 Running I/O for 10 seconds... 00:09:38.823 Latency(us) 00:09:38.823 [2024-11-28T11:40:08.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.823 Nvme0n1 : 1.00 24721.00 96.57 0.00 0.00 0.00 0.00 0.00 00:09:38.823 [2024-11-28T11:40:08.950Z] =================================================================================================================== 00:09:38.823 [2024-11-28T11:40:08.950Z] Total : 24721.00 96.57 0.00 0.00 0.00 0.00 0.00 00:09:38.823 00:09:39.767 12:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 24170085-8cc7-4469-8fc3-653cc367f258 00:09:39.767 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.767 Nvme0n1 : 2.00 24967.50 97.53 0.00 0.00 0.00 0.00 0.00 00:09:39.767 [2024-11-28T11:40:09.894Z] =================================================================================================================== 00:09:39.767 [2024-11-28T11:40:09.894Z] Total : 24967.50 97.53 0.00 0.00 0.00 0.00 0.00 00:09:39.767 00:09:39.767 true 00:09:39.767 12:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24170085-8cc7-4469-8fc3-653cc367f258 00:09:39.767 12:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:40.028 12:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:40.028 12:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:40.028 12:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3206898 00:09:40.970 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.970 Nvme0n1 : 3.00 25049.00 97.85 0.00 0.00 0.00 0.00 0.00 00:09:40.970 [2024-11-28T11:40:11.097Z] =================================================================================================================== 00:09:40.970 [2024-11-28T11:40:11.097Z] Total : 25049.00 97.85 0.00 0.00 0.00 0.00 0.00 00:09:40.970 00:09:41.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.912 Nvme0n1 : 4.00 25110.50 98.09 0.00 0.00 0.00 0.00 0.00 00:09:41.912 [2024-11-28T11:40:12.039Z] =================================================================================================================== 00:09:41.912 [2024-11-28T11:40:12.039Z] Total : 25110.50 98.09 0.00 0.00 0.00 0.00 0.00 00:09:41.912 00:09:42.855 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.855 Nvme0n1 : 5.00 25166.60 98.31 0.00 0.00 0.00 0.00 0.00 00:09:42.855 [2024-11-28T11:40:12.982Z] =================================================================================================================== 00:09:42.855 [2024-11-28T11:40:12.982Z] Total : 25166.60 98.31 0.00 0.00 0.00 0.00 0.00 00:09:42.855 00:09:43.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.800 Nvme0n1 : 6.00 25195.83 98.42 0.00 0.00 0.00 0.00 0.00 00:09:43.800 [2024-11-28T11:40:13.927Z] =================================================================================================================== 00:09:43.800 [2024-11-28T11:40:13.927Z] Total : 25195.83 98.42 0.00 0.00 0.00 0.00 0.00 00:09:43.800 00:09:44.748 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.748 Nvme0n1 : 7.00 25226.00 98.54 0.00 0.00 0.00 0.00 
0.00 00:09:44.748 [2024-11-28T11:40:14.875Z] =================================================================================================================== 00:09:44.748 [2024-11-28T11:40:14.875Z] Total : 25226.00 98.54 0.00 0.00 0.00 0.00 0.00 00:09:44.748 00:09:45.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.687 Nvme0n1 : 8.00 25248.50 98.63 0.00 0.00 0.00 0.00 0.00 00:09:45.687 [2024-11-28T11:40:15.814Z] =================================================================================================================== 00:09:45.687 [2024-11-28T11:40:15.814Z] Total : 25248.50 98.63 0.00 0.00 0.00 0.00 0.00 00:09:45.687 00:09:47.068 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.068 Nvme0n1 : 9.00 25265.67 98.69 0.00 0.00 0.00 0.00 0.00 00:09:47.068 [2024-11-28T11:40:17.195Z] =================================================================================================================== 00:09:47.068 [2024-11-28T11:40:17.195Z] Total : 25265.67 98.69 0.00 0.00 0.00 0.00 0.00 00:09:47.068 00:09:48.008 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.008 Nvme0n1 : 10.00 25286.30 98.77 0.00 0.00 0.00 0.00 0.00 00:09:48.008 [2024-11-28T11:40:18.135Z] =================================================================================================================== 00:09:48.008 [2024-11-28T11:40:18.135Z] Total : 25286.30 98.77 0.00 0.00 0.00 0.00 0.00 00:09:48.008 00:09:48.008 00:09:48.008 Latency(us) 00:09:48.008 [2024-11-28T11:40:18.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.008 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.008 Nvme0n1 : 10.00 25286.53 98.78 0.00 0.00 5058.86 3092.87 16312.84 00:09:48.008 [2024-11-28T11:40:18.135Z] =================================================================================================================== 00:09:48.008 
[2024-11-28T11:40:18.135Z] Total : 25286.53 98.78 0.00 0.00 5058.86 3092.87 16312.84 00:09:48.008 { 00:09:48.008 "results": [ 00:09:48.008 { 00:09:48.008 "job": "Nvme0n1", 00:09:48.008 "core_mask": "0x2", 00:09:48.008 "workload": "randwrite", 00:09:48.008 "status": "finished", 00:09:48.008 "queue_depth": 128, 00:09:48.008 "io_size": 4096, 00:09:48.008 "runtime": 10.004971, 00:09:48.008 "iops": 25286.53006590424, 00:09:48.008 "mibps": 98.77550806993844, 00:09:48.008 "io_failed": 0, 00:09:48.008 "io_timeout": 0, 00:09:48.008 "avg_latency_us": 5058.863694934228, 00:09:48.008 "min_latency_us": 3092.870030070164, 00:09:48.008 "max_latency_us": 16312.836618777146 00:09:48.008 } 00:09:48.008 ], 00:09:48.008 "core_count": 1 00:09:48.008 } 00:09:48.008 12:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3206695 00:09:48.008 12:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3206695 ']' 00:09:48.008 12:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3206695 00:09:48.008 12:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:48.008 12:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.008 12:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3206695 00:09:48.008 12:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:48.008 12:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:48.008 12:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3206695' 00:09:48.008 killing 
process with pid 3206695 00:09:48.008 12:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3206695 00:09:48.008 Received shutdown signal, test time was about 10.000000 seconds 00:09:48.008 00:09:48.008 Latency(us) 00:09:48.008 [2024-11-28T11:40:18.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.008 [2024-11-28T11:40:18.135Z] =================================================================================================================== 00:09:48.008 [2024-11-28T11:40:18.135Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:48.008 12:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3206695 00:09:48.008 12:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:48.269 12:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:48.269 12:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24170085-8cc7-4469-8fc3-653cc367f258 00:09:48.269 12:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:48.530 12:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:48.530 12:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:48.530 12:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3202830 
00:09:48.530 12:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3202830 00:09:48.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3202830 Killed "${NVMF_APP[@]}" "$@" 00:09:48.530 12:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:48.530 12:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:48.530 12:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:48.530 12:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:48.530 12:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:48.530 12:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3209191 00:09:48.530 12:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3209191 00:09:48.530 12:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:48.530 12:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3209191 ']' 00:09:48.530 12:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.530 12:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.530 12:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:09:48.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.530 12:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.530 12:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:48.530 [2024-11-28 12:40:18.619735] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:48.530 [2024-11-28 12:40:18.619792] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:48.791 [2024-11-28 12:40:18.761126] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:48.792 [2024-11-28 12:40:18.813863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.792 [2024-11-28 12:40:18.835072] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:48.792 [2024-11-28 12:40:18.835115] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:48.792 [2024-11-28 12:40:18.835121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:48.792 [2024-11-28 12:40:18.835126] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:48.792 [2024-11-28 12:40:18.835130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:48.792 [2024-11-28 12:40:18.835740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.363 12:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.363 12:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:49.363 12:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:49.363 12:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:49.363 12:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:49.363 12:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.363 12:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:49.625 [2024-11-28 12:40:19.600791] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:49.625 [2024-11-28 12:40:19.600870] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:49.625 [2024-11-28 12:40:19.600892] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:49.625 12:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:49.625 12:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e95d6d86-db0e-455c-8aff-f69b83a223ad 00:09:49.625 12:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e95d6d86-db0e-455c-8aff-f69b83a223ad 
00:09:49.625 12:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.625 12:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:49.625 12:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.625 12:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:49.625 12:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:49.885 12:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e95d6d86-db0e-455c-8aff-f69b83a223ad -t 2000 00:09:49.885 [ 00:09:49.885 { 00:09:49.885 "name": "e95d6d86-db0e-455c-8aff-f69b83a223ad", 00:09:49.885 "aliases": [ 00:09:49.885 "lvs/lvol" 00:09:49.885 ], 00:09:49.885 "product_name": "Logical Volume", 00:09:49.885 "block_size": 4096, 00:09:49.885 "num_blocks": 38912, 00:09:49.885 "uuid": "e95d6d86-db0e-455c-8aff-f69b83a223ad", 00:09:49.885 "assigned_rate_limits": { 00:09:49.885 "rw_ios_per_sec": 0, 00:09:49.885 "rw_mbytes_per_sec": 0, 00:09:49.885 "r_mbytes_per_sec": 0, 00:09:49.885 "w_mbytes_per_sec": 0 00:09:49.885 }, 00:09:49.885 "claimed": false, 00:09:49.885 "zoned": false, 00:09:49.885 "supported_io_types": { 00:09:49.885 "read": true, 00:09:49.885 "write": true, 00:09:49.885 "unmap": true, 00:09:49.885 "flush": false, 00:09:49.885 "reset": true, 00:09:49.885 "nvme_admin": false, 00:09:49.885 "nvme_io": false, 00:09:49.885 "nvme_io_md": false, 00:09:49.885 "write_zeroes": true, 00:09:49.885 "zcopy": false, 00:09:49.885 "get_zone_info": false, 00:09:49.885 "zone_management": false, 00:09:49.885 "zone_append": 
false, 00:09:49.885 "compare": false, 00:09:49.885 "compare_and_write": false, 00:09:49.885 "abort": false, 00:09:49.885 "seek_hole": true, 00:09:49.885 "seek_data": true, 00:09:49.885 "copy": false, 00:09:49.885 "nvme_iov_md": false 00:09:49.885 }, 00:09:49.885 "driver_specific": { 00:09:49.885 "lvol": { 00:09:49.885 "lvol_store_uuid": "24170085-8cc7-4469-8fc3-653cc367f258", 00:09:49.885 "base_bdev": "aio_bdev", 00:09:49.885 "thin_provision": false, 00:09:49.885 "num_allocated_clusters": 38, 00:09:49.885 "snapshot": false, 00:09:49.885 "clone": false, 00:09:49.885 "esnap_clone": false 00:09:49.885 } 00:09:49.885 } 00:09:49.885 } 00:09:49.885 ] 00:09:49.885 12:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:49.885 12:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24170085-8cc7-4469-8fc3-653cc367f258 00:09:49.885 12:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:50.146 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:50.146 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24170085-8cc7-4469-8fc3-653cc367f258 00:09:50.146 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:50.406 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:50.406 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:50.406 [2024-11-28 12:40:20.439438] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:50.406 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24170085-8cc7-4469-8fc3-653cc367f258 00:09:50.406 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:50.406 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24170085-8cc7-4469-8fc3-653cc367f258 00:09:50.406 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:50.407 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:50.407 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:50.407 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:50.407 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:50.407 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:50.407 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:50.407 12:40:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:50.407 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24170085-8cc7-4469-8fc3-653cc367f258 00:09:50.667 request: 00:09:50.667 { 00:09:50.667 "uuid": "24170085-8cc7-4469-8fc3-653cc367f258", 00:09:50.667 "method": "bdev_lvol_get_lvstores", 00:09:50.667 "req_id": 1 00:09:50.667 } 00:09:50.667 Got JSON-RPC error response 00:09:50.667 response: 00:09:50.667 { 00:09:50.667 "code": -19, 00:09:50.667 "message": "No such device" 00:09:50.667 } 00:09:50.667 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:50.667 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:50.667 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:50.667 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:50.667 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:50.928 aio_bdev 00:09:50.928 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e95d6d86-db0e-455c-8aff-f69b83a223ad 00:09:50.928 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e95d6d86-db0e-455c-8aff-f69b83a223ad 00:09:50.928 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:50.928 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:50.928 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:50.928 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:50.928 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:50.928 12:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e95d6d86-db0e-455c-8aff-f69b83a223ad -t 2000 00:09:51.188 [ 00:09:51.188 { 00:09:51.188 "name": "e95d6d86-db0e-455c-8aff-f69b83a223ad", 00:09:51.188 "aliases": [ 00:09:51.188 "lvs/lvol" 00:09:51.188 ], 00:09:51.188 "product_name": "Logical Volume", 00:09:51.189 "block_size": 4096, 00:09:51.189 "num_blocks": 38912, 00:09:51.189 "uuid": "e95d6d86-db0e-455c-8aff-f69b83a223ad", 00:09:51.189 "assigned_rate_limits": { 00:09:51.189 "rw_ios_per_sec": 0, 00:09:51.189 "rw_mbytes_per_sec": 0, 00:09:51.189 "r_mbytes_per_sec": 0, 00:09:51.189 "w_mbytes_per_sec": 0 00:09:51.189 }, 00:09:51.189 "claimed": false, 00:09:51.189 "zoned": false, 00:09:51.189 "supported_io_types": { 00:09:51.189 "read": true, 00:09:51.189 "write": true, 00:09:51.189 "unmap": true, 00:09:51.189 "flush": false, 00:09:51.189 "reset": true, 00:09:51.189 "nvme_admin": false, 00:09:51.189 "nvme_io": false, 00:09:51.189 "nvme_io_md": false, 00:09:51.189 "write_zeroes": true, 00:09:51.189 "zcopy": false, 00:09:51.189 "get_zone_info": false, 00:09:51.189 "zone_management": false, 00:09:51.189 "zone_append": false, 00:09:51.189 "compare": false, 00:09:51.189 "compare_and_write": false, 
00:09:51.189 "abort": false, 00:09:51.189 "seek_hole": true, 00:09:51.189 "seek_data": true, 00:09:51.189 "copy": false, 00:09:51.189 "nvme_iov_md": false 00:09:51.189 }, 00:09:51.189 "driver_specific": { 00:09:51.189 "lvol": { 00:09:51.189 "lvol_store_uuid": "24170085-8cc7-4469-8fc3-653cc367f258", 00:09:51.189 "base_bdev": "aio_bdev", 00:09:51.189 "thin_provision": false, 00:09:51.189 "num_allocated_clusters": 38, 00:09:51.189 "snapshot": false, 00:09:51.189 "clone": false, 00:09:51.189 "esnap_clone": false 00:09:51.189 } 00:09:51.189 } 00:09:51.189 } 00:09:51.189 ] 00:09:51.189 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:51.189 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24170085-8cc7-4469-8fc3-653cc367f258 00:09:51.189 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:51.449 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:51.449 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24170085-8cc7-4469-8fc3-653cc367f258 00:09:51.449 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:51.449 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:51.449 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e95d6d86-db0e-455c-8aff-f69b83a223ad 00:09:51.710 12:40:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 24170085-8cc7-4469-8fc3-653cc367f258 00:09:51.970 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:51.971 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:51.971 00:09:51.971 real 0m17.555s 00:09:51.971 user 0m45.739s 00:09:51.971 sys 0m3.081s 00:09:51.971 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.971 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:51.971 ************************************ 00:09:51.971 END TEST lvs_grow_dirty 00:09:51.971 ************************************ 00:09:51.971 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:51.971 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:51.971 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:51.971 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:51.971 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:52.231 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:52.231 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:52.231 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:52.231 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:52.231 nvmf_trace.0 00:09:52.231 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:52.231 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:52.231 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:52.231 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:52.231 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:52.231 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:52.231 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:52.231 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:52.231 rmmod nvme_tcp 00:09:52.231 rmmod nvme_fabrics 00:09:52.231 rmmod nvme_keyring 00:09:52.231 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:52.231 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:52.231 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:52.231 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3209191 ']' 00:09:52.232 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3209191 00:09:52.232 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3209191 ']' 00:09:52.232 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3209191 
00:09:52.232 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:52.232 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.232 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3209191 00:09:52.232 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.232 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.232 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3209191' 00:09:52.232 killing process with pid 3209191 00:09:52.232 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3209191 00:09:52.232 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3209191 00:09:52.492 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:52.492 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:52.492 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:52.492 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:52.492 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:52.492 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:52.492 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:52.492 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:52.492 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:09:52.492 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.492 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.492 12:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.406 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:54.406 00:09:54.406 real 0m44.936s 00:09:54.406 user 1m7.510s 00:09:54.406 sys 0m10.713s 00:09:54.406 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.406 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:54.406 ************************************ 00:09:54.406 END TEST nvmf_lvs_grow 00:09:54.406 ************************************ 00:09:54.406 12:40:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:54.406 12:40:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:54.406 12:40:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.406 12:40:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:54.668 ************************************ 00:09:54.668 START TEST nvmf_bdev_io_wait 00:09:54.668 ************************************ 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:54.668 * Looking for test storage... 
00:09:54.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:54.668 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.668 --rc genhtml_branch_coverage=1 00:09:54.668 --rc genhtml_function_coverage=1 00:09:54.668 --rc genhtml_legend=1 00:09:54.668 --rc geninfo_all_blocks=1 00:09:54.668 --rc geninfo_unexecuted_blocks=1 00:09:54.668 00:09:54.668 ' 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:54.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.668 --rc genhtml_branch_coverage=1 00:09:54.668 --rc genhtml_function_coverage=1 00:09:54.668 --rc genhtml_legend=1 00:09:54.668 --rc geninfo_all_blocks=1 00:09:54.668 --rc geninfo_unexecuted_blocks=1 00:09:54.668 00:09:54.668 ' 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:54.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.668 --rc genhtml_branch_coverage=1 00:09:54.668 --rc genhtml_function_coverage=1 00:09:54.668 --rc genhtml_legend=1 00:09:54.668 --rc geninfo_all_blocks=1 00:09:54.668 --rc geninfo_unexecuted_blocks=1 00:09:54.668 00:09:54.668 ' 00:09:54.668 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:54.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.668 --rc genhtml_branch_coverage=1 00:09:54.668 --rc genhtml_function_coverage=1 00:09:54.668 --rc genhtml_legend=1 00:09:54.668 --rc geninfo_all_blocks=1 00:09:54.668 --rc geninfo_unexecuted_blocks=1 00:09:54.668 00:09:54.669 ' 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:54.669 12:40:24 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:54.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:54.669 12:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:02.813 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:02.813 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:02.813 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:02.813 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:02.813 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:02.813 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:02.813 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:02.813 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:02.813 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:02.813 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:02.813 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:02.813 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:02.813 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:02.813 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:10:02.813 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:02.813 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:02.813 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:02.813 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:02.813 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:02.813 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:02.813 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:02.813 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:02.814 12:40:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:02.814 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:02.814 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.814 12:40:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:02.814 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.814 
12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:02.814 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:02.814 12:40:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:02.814 12:40:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:02.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:02.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:10:02.814 00:10:02.814 --- 10.0.0.2 ping statistics --- 00:10:02.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.814 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:02.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:02.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:10:02.814 00:10:02.814 --- 10.0.0.1 ping statistics --- 00:10:02.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.814 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3214245 00:10:02.814 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 3214245 00:10:02.815 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:02.815 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3214245 ']' 00:10:02.815 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.815 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.815 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.815 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.815 12:40:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:02.815 [2024-11-28 12:40:32.425712] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:10:02.815 [2024-11-28 12:40:32.425788] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.815 [2024-11-28 12:40:32.570033] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:02.815 [2024-11-28 12:40:32.627274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:02.815 [2024-11-28 12:40:32.656621] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:02.815 [2024-11-28 12:40:32.656668] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.815 [2024-11-28 12:40:32.656677] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.815 [2024-11-28 12:40:32.656684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.815 [2024-11-28 12:40:32.656691] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.815 [2024-11-28 12:40:32.658995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.815 [2024-11-28 12:40:32.659155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.815 [2024-11-28 12:40:32.659318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.815 [2024-11-28 12:40:32.659319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.388 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.388 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:03.388 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:03.388 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:03.388 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.388 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.388 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:03.388 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.388 
12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.388 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.388 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:03.388 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.388 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.388 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.388 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:03.388 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.388 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.388 [2024-11-28 12:40:33.368784] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.388 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.388 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:03.388 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.389 Malloc0 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:03.389 
12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.389 [2024-11-28 12:40:33.434102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3214311 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 
00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3214314 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:03.389 { 00:10:03.389 "params": { 00:10:03.389 "name": "Nvme$subsystem", 00:10:03.389 "trtype": "$TEST_TRANSPORT", 00:10:03.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.389 "adrfam": "ipv4", 00:10:03.389 "trsvcid": "$NVMF_PORT", 00:10:03.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.389 "hdgst": ${hdgst:-false}, 00:10:03.389 "ddgst": ${ddgst:-false} 00:10:03.389 }, 00:10:03.389 "method": "bdev_nvme_attach_controller" 00:10:03.389 } 00:10:03.389 EOF 00:10:03.389 )") 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3214317 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:03.389 12:40:33 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:03.389 { 00:10:03.389 "params": { 00:10:03.389 "name": "Nvme$subsystem", 00:10:03.389 "trtype": "$TEST_TRANSPORT", 00:10:03.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.389 "adrfam": "ipv4", 00:10:03.389 "trsvcid": "$NVMF_PORT", 00:10:03.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.389 "hdgst": ${hdgst:-false}, 00:10:03.389 "ddgst": ${ddgst:-false} 00:10:03.389 }, 00:10:03.389 "method": "bdev_nvme_attach_controller" 00:10:03.389 } 00:10:03.389 EOF 00:10:03.389 )") 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3214320 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:03.389 { 00:10:03.389 "params": { 00:10:03.389 "name": "Nvme$subsystem", 00:10:03.389 "trtype": "$TEST_TRANSPORT", 00:10:03.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.389 "adrfam": "ipv4", 
00:10:03.389 "trsvcid": "$NVMF_PORT", 00:10:03.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.389 "hdgst": ${hdgst:-false}, 00:10:03.389 "ddgst": ${ddgst:-false} 00:10:03.389 }, 00:10:03.389 "method": "bdev_nvme_attach_controller" 00:10:03.389 } 00:10:03.389 EOF 00:10:03.389 )") 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:03.389 { 00:10:03.389 "params": { 00:10:03.389 "name": "Nvme$subsystem", 00:10:03.389 "trtype": "$TEST_TRANSPORT", 00:10:03.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.389 "adrfam": "ipv4", 00:10:03.389 "trsvcid": "$NVMF_PORT", 00:10:03.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.389 "hdgst": ${hdgst:-false}, 00:10:03.389 "ddgst": ${ddgst:-false} 00:10:03.389 }, 00:10:03.389 "method": "bdev_nvme_attach_controller" 00:10:03.389 } 00:10:03.389 EOF 00:10:03.389 )") 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@37 -- # wait 3214311 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:03.389 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:03.389 "params": { 00:10:03.389 "name": "Nvme1", 00:10:03.389 "trtype": "tcp", 00:10:03.389 "traddr": "10.0.0.2", 00:10:03.389 "adrfam": "ipv4", 00:10:03.389 "trsvcid": "4420", 00:10:03.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.390 "hdgst": false, 00:10:03.390 "ddgst": false 00:10:03.390 }, 00:10:03.390 "method": "bdev_nvme_attach_controller" 00:10:03.390 }' 00:10:03.390 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
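The `WRITE_PID=…` / `wait 3214311` bookkeeping traced above follows a standard shell pattern: each bdevperf workload is started in the background, its PID is captured, and the script later blocks on each PID in turn to reap that job's exit status. A minimal stand-alone sketch of that pattern (an approximation of what `bdev_io_wait.sh` does; `sleep` stands in for the real bdevperf invocations):

```shell
#!/usr/bin/env bash
# Sketch of the background-launch/wait pattern visible in the trace.
# `run_workload` is a placeholder for the real
# `bdevperf -m <coremask> -i <n> --json /dev/fd/63 -q 128 -o 4096 -w <op>` call.
run_workload() { sleep 0.1; }

run_workload write & WRITE_PID=$!
run_workload read  & READ_PID=$!
run_workload flush & FLUSH_PID=$!
run_workload unmap & UNMAP_PID=$!

# Reap each job; `wait <pid>` returns that job's exit status.
wait "$WRITE_PID" && wait "$READ_PID" && wait "$FLUSH_PID" && wait "$UNMAP_PID"
echo "all workloads reaped"
```

Waiting on each PID individually (rather than a bare `wait`) is what lets the script interleave the `wait` calls with other work, as the trace shows it doing between result tables.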
00:10:03.390 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:03.390 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:03.390 "params": { 00:10:03.390 "name": "Nvme1", 00:10:03.390 "trtype": "tcp", 00:10:03.390 "traddr": "10.0.0.2", 00:10:03.390 "adrfam": "ipv4", 00:10:03.390 "trsvcid": "4420", 00:10:03.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.390 "hdgst": false, 00:10:03.390 "ddgst": false 00:10:03.390 }, 00:10:03.390 "method": "bdev_nvme_attach_controller" 00:10:03.390 }' 00:10:03.390 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:03.390 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:03.390 "params": { 00:10:03.390 "name": "Nvme1", 00:10:03.390 "trtype": "tcp", 00:10:03.390 "traddr": "10.0.0.2", 00:10:03.390 "adrfam": "ipv4", 00:10:03.390 "trsvcid": "4420", 00:10:03.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.390 "hdgst": false, 00:10:03.390 "ddgst": false 00:10:03.390 }, 00:10:03.390 "method": "bdev_nvme_attach_controller" 00:10:03.390 }' 00:10:03.390 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:03.390 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:03.390 "params": { 00:10:03.390 "name": "Nvme1", 00:10:03.390 "trtype": "tcp", 00:10:03.390 "traddr": "10.0.0.2", 00:10:03.390 "adrfam": "ipv4", 00:10:03.390 "trsvcid": "4420", 00:10:03.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.390 "hdgst": false, 00:10:03.390 "ddgst": false 00:10:03.390 }, 00:10:03.390 "method": "bdev_nvme_attach_controller" 00:10:03.390 }' 00:10:03.390 [2024-11-28 12:40:33.485683] Starting SPDK v25.01-pre git sha1 
35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:10:03.390 [2024-11-28 12:40:33.485757] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:03.390 [2024-11-28 12:40:33.495871] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:10:03.390 [2024-11-28 12:40:33.495935] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:03.390 [2024-11-28 12:40:33.499096] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:10:03.390 [2024-11-28 12:40:33.499157] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:03.390 [2024-11-28 12:40:33.501758] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:10:03.390 [2024-11-28 12:40:33.501828] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:03.652 [2024-11-28 12:40:33.745093] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:03.914 [2024-11-28 12:40:33.805456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.914 [2024-11-28 12:40:33.811475] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:10:03.914 [2024-11-28 12:40:33.823545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:03.914 [2024-11-28 12:40:33.870532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.914 [2024-11-28 12:40:33.877228] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:03.914 [2024-11-28 12:40:33.886891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:03.914 [2024-11-28 12:40:33.938723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.914 [2024-11-28 12:40:33.954552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:03.914 [2024-11-28 12:40:33.972020] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:03.914 Running I/O for 1 seconds... 00:10:03.914 [2024-11-28 12:40:34.035600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.176 Running I/O for 1 seconds... 00:10:04.176 [2024-11-28 12:40:34.054613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:04.176 Running I/O for 1 seconds... 00:10:04.176 Running I/O for 1 seconds... 
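The `config=()` / heredoc / `jq .` trace interleaved above is SPDK's `gen_nvmf_target_json` at work: each subsystem's attach parameters are captured as a JSON fragment in a bash array, the fragments are joined, validated with `jq`, and handed to bdevperf through process substitution (`--json /dev/fd/63`). A simplified, self-contained sketch of that shape (the literal values are taken from the trace; the loop structure is an approximation of `nvmf/common.sh`, not a verbatim copy):

```shell
#!/usr/bin/env bash
# Simplified sketch of gen_nvmf_target_json: one JSON fragment per subsystem,
# collected in an array and validated by jq. The real script feeds this into
# `bdevperf --json /dev/fd/63` via process substitution.
config=()
for subsystem in 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
json=$(printf '%s\n' "${config[*]}")
printf '%s\n' "$json" | jq -r '.method'
```

With a single subsystem this prints `bdev_nvme_attach_controller`, matching the resolved config the trace shows `printf '%s\n'` emitting for Nvme1.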
00:10:05.119 12072.00 IOPS, 47.16 MiB/s 00:10:05.119 Latency(us) 00:10:05.119 [2024-11-28T11:40:35.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.119 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:05.119 Nvme1n1 : 1.01 12129.39 47.38 0.00 0.00 10515.26 5391.99 18830.93 00:10:05.119 [2024-11-28T11:40:35.246Z] =================================================================================================================== 00:10:05.119 [2024-11-28T11:40:35.246Z] Total : 12129.39 47.38 0.00 0.00 10515.26 5391.99 18830.93 00:10:05.119 6253.00 IOPS, 24.43 MiB/s 00:10:05.119 Latency(us) 00:10:05.119 [2024-11-28T11:40:35.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.119 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:05.119 Nvme1n1 : 1.02 6276.46 24.52 0.00 0.00 20219.23 7006.86 33282.57 00:10:05.119 [2024-11-28T11:40:35.246Z] =================================================================================================================== 00:10:05.119 [2024-11-28T11:40:35.246Z] Total : 6276.46 24.52 0.00 0.00 20219.23 7006.86 33282.57 00:10:05.119 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3214314 00:10:05.119 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3214317 00:10:05.119 6708.00 IOPS, 26.20 MiB/s 00:10:05.119 Latency(us) 00:10:05.119 [2024-11-28T11:40:35.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.119 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:05.119 Nvme1n1 : 1.01 6833.19 26.69 0.00 0.00 18678.18 4269.80 42698.03 00:10:05.119 [2024-11-28T11:40:35.246Z] =================================================================================================================== 00:10:05.119 [2024-11-28T11:40:35.246Z] Total : 6833.19 26.69 0.00 0.00 18678.18 
4269.80 42698.03 00:10:05.119 180384.00 IOPS, 704.62 MiB/s 00:10:05.119 Latency(us) 00:10:05.119 [2024-11-28T11:40:35.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.119 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:05.119 Nvme1n1 : 1.00 180022.95 703.21 0.00 0.00 706.77 299.37 1970.68 00:10:05.119 [2024-11-28T11:40:35.246Z] =================================================================================================================== 00:10:05.119 [2024-11-28T11:40:35.246Z] Total : 180022.95 703.21 0.00 0.00 706.77 299.37 1970.68 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3214320 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
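The IOPS and MiB/s columns in the result tables above are tied together by the fixed 4 KiB IO size every job uses (`-o 4096`): MiB/s = IOPS * 4096 / 2^20. A quick cross-check against the write job's first sample (12072.00 IOPS reported as 47.16 MiB/s):

```shell
#!/usr/bin/env bash
# Cross-check of bdevperf's throughput column: with -o 4096,
# MiB/s = IOPS * 4096 / 1048576.
iops=12072
io_size=4096
mib_s=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f", i * s / 1048576 }')
echo "$mib_s MiB/s"   # prints 47.16 MiB/s, matching the table
```

The same relation explains the flush job's outlier numbers (180384 IOPS, 704.62 MiB/s): flushes complete without moving data over the wire, so IOPS is high while the MiB/s column is still just IOPS scaled by the nominal IO size.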
00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:05.380 rmmod nvme_tcp 00:10:05.380 rmmod nvme_fabrics 00:10:05.380 rmmod nvme_keyring 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3214245 ']' 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3214245 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3214245 ']' 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3214245 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3214245 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3214245' 00:10:05.380 killing process with pid 3214245 00:10:05.380 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3214245 00:10:05.380 12:40:35 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3214245 00:10:05.642 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:05.642 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:05.642 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:05.642 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:05.642 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:05.642 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:05.642 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:05.642 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:05.642 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:05.642 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.642 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.642 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:08.194 00:10:08.194 real 0m13.159s 00:10:08.194 user 0m18.846s 00:10:08.194 sys 0m7.573s 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.194 ************************************ 
00:10:08.194 END TEST nvmf_bdev_io_wait 00:10:08.194 ************************************ 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:08.194 ************************************ 00:10:08.194 START TEST nvmf_queue_depth 00:10:08.194 ************************************ 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:08.194 * Looking for test storage... 00:10:08.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:08.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.194 --rc genhtml_branch_coverage=1 00:10:08.194 --rc genhtml_function_coverage=1 00:10:08.194 --rc genhtml_legend=1 00:10:08.194 --rc geninfo_all_blocks=1 00:10:08.194 --rc 
geninfo_unexecuted_blocks=1 00:10:08.194 00:10:08.194 ' 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:08.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.194 --rc genhtml_branch_coverage=1 00:10:08.194 --rc genhtml_function_coverage=1 00:10:08.194 --rc genhtml_legend=1 00:10:08.194 --rc geninfo_all_blocks=1 00:10:08.194 --rc geninfo_unexecuted_blocks=1 00:10:08.194 00:10:08.194 ' 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:08.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.194 --rc genhtml_branch_coverage=1 00:10:08.194 --rc genhtml_function_coverage=1 00:10:08.194 --rc genhtml_legend=1 00:10:08.194 --rc geninfo_all_blocks=1 00:10:08.194 --rc geninfo_unexecuted_blocks=1 00:10:08.194 00:10:08.194 ' 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:08.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.194 --rc genhtml_branch_coverage=1 00:10:08.194 --rc genhtml_function_coverage=1 00:10:08.194 --rc genhtml_legend=1 00:10:08.194 --rc geninfo_all_blocks=1 00:10:08.194 --rc geninfo_unexecuted_blocks=1 00:10:08.194 00:10:08.194 ' 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.194 12:40:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.195 12:40:38 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.195 12:40:38 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:08.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.195 12:40:38 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:08.195 12:40:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.353 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.353 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:16.353 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:16.353 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:16.353 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:16.353 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:16.353 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:16.353 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:16.353 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:16.353 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:16.353 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:16.354 12:40:45 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:16.354 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:16.354 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:16.354 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:16.354 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:16.354 
12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:16.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:16.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:10:16.354 00:10:16.354 --- 10.0.0.2 ping statistics --- 00:10:16.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.354 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:16.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:16.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:10:16.354 00:10:16.354 --- 10.0.0.1 ping statistics --- 00:10:16.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.354 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:16.354 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:16.355 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:16.355 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.355 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3219006 00:10:16.355 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
3219006 00:10:16.355 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:16.355 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3219006 ']' 00:10:16.355 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.355 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.355 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.355 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.355 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.355 [2024-11-28 12:40:45.661658] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:10:16.355 [2024-11-28 12:40:45.661725] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.355 [2024-11-28 12:40:45.808141] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:16.355 [2024-11-28 12:40:45.868841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.355 [2024-11-28 12:40:45.895245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:16.355 [2024-11-28 12:40:45.895289] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.355 [2024-11-28 12:40:45.895297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.355 [2024-11-28 12:40:45.895304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.355 [2024-11-28 12:40:45.895311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:16.355 [2024-11-28 12:40:45.896029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.616 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.616 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:16.616 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:16.616 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:16.616 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.616 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.616 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:16.616 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.616 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.616 [2024-11-28 12:40:46.538614] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.616 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:16.616 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:16.616 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.617 Malloc0 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.617 [2024-11-28 12:40:46.599508] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3219356 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3219356 /var/tmp/bdevperf.sock 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3219356 ']' 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:16.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.617 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.617 [2024-11-28 12:40:46.659700] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:10:16.617 [2024-11-28 12:40:46.659764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3219356 ] 00:10:16.878 [2024-11-28 12:40:46.796247] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:16.878 [2024-11-28 12:40:46.853648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.878 [2024-11-28 12:40:46.881593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.451 12:40:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.451 12:40:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:17.451 12:40:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:17.451 12:40:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.451 12:40:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:17.713 NVMe0n1 00:10:17.713 12:40:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.713 12:40:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:17.713 Running I/O for 10 seconds... 
00:10:19.673 9067.00 IOPS, 35.42 MiB/s [2024-11-28T11:40:51.248Z] 10240.00 IOPS, 40.00 MiB/s [2024-11-28T11:40:51.934Z] 10602.00 IOPS, 41.41 MiB/s [2024-11-28T11:40:52.875Z] 11011.75 IOPS, 43.01 MiB/s [2024-11-28T11:40:53.816Z] 11471.80 IOPS, 44.81 MiB/s [2024-11-28T11:40:55.200Z] 11772.33 IOPS, 45.99 MiB/s [2024-11-28T11:40:56.142Z] 12007.86 IOPS, 46.91 MiB/s [2024-11-28T11:40:57.085Z] 12224.50 IOPS, 47.75 MiB/s [2024-11-28T11:40:58.030Z] 12400.44 IOPS, 48.44 MiB/s [2024-11-28T11:40:58.030Z] 12558.70 IOPS, 49.06 MiB/s 00:10:27.903 Latency(us) 00:10:27.903 [2024-11-28T11:40:58.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:27.903 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:27.903 Verification LBA range: start 0x0 length 0x4000 00:10:27.903 NVMe0n1 : 10.05 12585.02 49.16 0.00 0.00 81054.44 18393.00 71382.35 00:10:27.903 [2024-11-28T11:40:58.030Z] =================================================================================================================== 00:10:27.903 [2024-11-28T11:40:58.030Z] Total : 12585.02 49.16 0.00 0.00 81054.44 18393.00 71382.35 00:10:27.903 { 00:10:27.903 "results": [ 00:10:27.903 { 00:10:27.903 "job": "NVMe0n1", 00:10:27.903 "core_mask": "0x1", 00:10:27.903 "workload": "verify", 00:10:27.903 "status": "finished", 00:10:27.903 "verify_range": { 00:10:27.903 "start": 0, 00:10:27.903 "length": 16384 00:10:27.903 }, 00:10:27.903 "queue_depth": 1024, 00:10:27.903 "io_size": 4096, 00:10:27.903 "runtime": 10.05306, 00:10:27.903 "iops": 12585.0238633809, 00:10:27.903 "mibps": 49.16024946633164, 00:10:27.903 "io_failed": 0, 00:10:27.903 "io_timeout": 0, 00:10:27.903 "avg_latency_us": 81054.44453811187, 00:10:27.903 "min_latency_us": 18392.99699298363, 00:10:27.903 "max_latency_us": 71382.3454727698 00:10:27.903 } 00:10:27.903 ], 00:10:27.903 "core_count": 1 00:10:27.903 } 00:10:27.903 12:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 3219356 00:10:27.903 12:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3219356 ']' 00:10:27.903 12:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3219356 00:10:27.903 12:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:27.903 12:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.903 12:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3219356 00:10:27.903 12:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:27.903 12:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:27.903 12:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3219356' 00:10:27.903 killing process with pid 3219356 00:10:27.903 12:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3219356 00:10:27.903 Received shutdown signal, test time was about 10.000000 seconds 00:10:27.903 00:10:27.903 Latency(us) 00:10:27.903 [2024-11-28T11:40:58.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:27.903 [2024-11-28T11:40:58.030Z] =================================================================================================================== 00:10:27.903 [2024-11-28T11:40:58.030Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:27.903 12:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3219356 00:10:27.903 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:27.903 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:10:27.903 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:27.903 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:28.165 rmmod nvme_tcp 00:10:28.165 rmmod nvme_fabrics 00:10:28.165 rmmod nvme_keyring 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3219006 ']' 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3219006 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3219006 ']' 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3219006 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3219006 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3219006' 00:10:28.165 killing process with pid 3219006 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3219006 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3219006 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.165 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.717 12:41:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:30.717 00:10:30.717 real 0m22.567s 00:10:30.717 user 0m25.593s 00:10:30.717 sys 0m7.149s 00:10:30.717 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.717 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:30.717 ************************************ 00:10:30.717 END TEST nvmf_queue_depth 00:10:30.717 ************************************ 00:10:30.717 12:41:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:30.717 12:41:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:30.717 12:41:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:30.718 ************************************ 00:10:30.718 START TEST nvmf_target_multipath 00:10:30.718 ************************************ 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:30.718 * Looking for test storage... 
00:10:30.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:30.718 12:41:00 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
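The trace above walks through the `cmp_versions` helper in `scripts/common.sh`: both version strings are split on `.` and `-` into arrays (`IFS=.-`, `read -ra`), then compared component by component (`decimal 1`, `decimal 2`, `(( ver1[v] < ver2[v] ))`). A minimal standalone sketch of that pattern (this is a simplified reimplementation for illustration, not the SPDK script itself; the function name `lt` mirrors the helper seen in the trace):

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions pattern from the trace: split each version on
# '.' and '-', then compare numeric components left to right, padding the
# shorter array with zeros.
lt() {  # exits 0 (true) when version $1 < version $2
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal versions are not less-than
}

lt 1.15 2 && echo "1.15 < 2"
lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

In the log this check (`lt 1.15 2`) gates whether the installed `lcov` is old enough to need the `--rc lcov_branch_coverage=1` style of options rather than the newer flag spelling.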
00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:30.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.718 --rc genhtml_branch_coverage=1 00:10:30.718 --rc genhtml_function_coverage=1 00:10:30.718 --rc genhtml_legend=1 00:10:30.718 --rc geninfo_all_blocks=1 00:10:30.718 --rc geninfo_unexecuted_blocks=1 00:10:30.718 00:10:30.718 ' 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:30.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.718 --rc genhtml_branch_coverage=1 00:10:30.718 --rc genhtml_function_coverage=1 00:10:30.718 --rc genhtml_legend=1 00:10:30.718 --rc geninfo_all_blocks=1 00:10:30.718 --rc geninfo_unexecuted_blocks=1 00:10:30.718 00:10:30.718 ' 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:30.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.718 --rc genhtml_branch_coverage=1 00:10:30.718 --rc genhtml_function_coverage=1 00:10:30.718 --rc genhtml_legend=1 00:10:30.718 --rc geninfo_all_blocks=1 00:10:30.718 --rc geninfo_unexecuted_blocks=1 00:10:30.718 00:10:30.718 ' 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:30.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.718 --rc genhtml_branch_coverage=1 00:10:30.718 --rc genhtml_function_coverage=1 00:10:30.718 --rc genhtml_legend=1 00:10:30.718 --rc geninfo_all_blocks=1 00:10:30.718 --rc geninfo_unexecuted_blocks=1 00:10:30.718 00:10:30.718 ' 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.718 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.719 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:30.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:30.719 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:30.719 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:30.719 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:30.719 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:30.719 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:30.719 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:30.719 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:30.719 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:30.719 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:30.719 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.719 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:30.719 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:30.719 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:30.719 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.719 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.719 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.719 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:30.719 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:30.719 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:30.719 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:38.864 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:38.864 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:38.865 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:38.865 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:38.865 12:41:07 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:38.865 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:38.865 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:38.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:38.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.715 ms 00:10:38.865 00:10:38.865 --- 10.0.0.2 ping statistics --- 00:10:38.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.865 rtt min/avg/max/mdev = 0.715/0.715/0.715/0.000 ms 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:38.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:38.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:10:38.865 00:10:38.865 --- 10.0.0.1 ping statistics --- 00:10:38.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.865 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:38.865 only one NIC for nvmf test 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:38.865 12:41:08 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:38.865 rmmod nvme_tcp 00:10:38.865 rmmod nvme_fabrics 00:10:38.865 rmmod nvme_keyring 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
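The `ipts`/`iptr` pair visible in the trace implements a tag-and-sweep firewall cleanup: every rule the test installs carries an `-m comment --comment 'SPDK_NVMF:…'` marker, so teardown can simply dump the ruleset with `iptables-save`, drop the tagged lines with `grep -v SPDK_NVMF`, and feed the rest back to `iptables-restore`. A sketch of the filtering step, simulated against a plain-text ruleset rather than a live iptables table (the sample rules are hypothetical):

```shell
#!/usr/bin/env bash
# Tag-and-sweep cleanup as in the ipts/iptr helpers: only rules carrying the
# SPDK_NVMF comment are removed; pre-existing rules survive untouched.
ruleset='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."
-A INPUT -p icmp -j ACCEPT'

# iptr equivalent: keep everything except the SPDK-tagged rules
cleaned=$(grep -v SPDK_NVMF <<< "$ruleset")
echo "$cleaned"
```

The design choice is that cleanup needs no record of what was added: the marker in the rule itself is the record, which is robust even if the test aborts before reaching its own teardown path.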
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.865 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:40.783 00:10:40.783 real 0m10.018s 00:10:40.783 user 0m2.195s 00:10:40.783 sys 0m5.767s 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:40.783 ************************************ 00:10:40.783 END TEST nvmf_target_multipath 00:10:40.783 ************************************ 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:40.783 ************************************ 00:10:40.783 START TEST nvmf_zcopy 00:10:40.783 ************************************ 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:40.783 * Looking for test storage... 00:10:40.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:40.783 12:41:10 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:40.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.783 --rc genhtml_branch_coverage=1 00:10:40.783 --rc genhtml_function_coverage=1 00:10:40.783 --rc genhtml_legend=1 00:10:40.783 --rc geninfo_all_blocks=1 00:10:40.783 --rc geninfo_unexecuted_blocks=1 00:10:40.783 00:10:40.783 ' 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:40.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.783 --rc genhtml_branch_coverage=1 00:10:40.783 --rc genhtml_function_coverage=1 00:10:40.783 --rc genhtml_legend=1 00:10:40.783 --rc geninfo_all_blocks=1 00:10:40.783 --rc geninfo_unexecuted_blocks=1 00:10:40.783 00:10:40.783 ' 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:40.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.783 --rc genhtml_branch_coverage=1 00:10:40.783 --rc genhtml_function_coverage=1 00:10:40.783 --rc genhtml_legend=1 00:10:40.783 --rc geninfo_all_blocks=1 00:10:40.783 --rc geninfo_unexecuted_blocks=1 00:10:40.783 00:10:40.783 ' 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:40.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.783 --rc genhtml_branch_coverage=1 00:10:40.783 --rc 
genhtml_function_coverage=1 00:10:40.783 --rc genhtml_legend=1 00:10:40.783 --rc geninfo_all_blocks=1 00:10:40.783 --rc geninfo_unexecuted_blocks=1 00:10:40.783 00:10:40.783 ' 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.783 12:41:10 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.783 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:40.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:40.784 12:41:10 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:40.784 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:48.931 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:48.932 12:41:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:48.932 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:48.932 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.932 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:48.932 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:48.932 12:41:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:48.932 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.932 12:41:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:48.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:48.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:10:48.932 00:10:48.932 --- 10.0.0.2 ping statistics --- 00:10:48.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.932 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:48.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:48.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:10:48.932 00:10:48.932 --- 10.0.0.1 ping statistics --- 00:10:48.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.932 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:48.932 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:48.933 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.933 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:48.933 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:48.933 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:48.933 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:48.933 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:48.933 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:48.933 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3230059 00:10:48.933 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3230059 00:10:48.933 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:48.933 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3230059 ']' 00:10:48.933 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.933 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.933 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.933 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.933 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:48.933 [2024-11-28 12:41:18.427273] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:10:48.933 [2024-11-28 12:41:18.427345] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.933 [2024-11-28 12:41:18.570661] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:48.933 [2024-11-28 12:41:18.629697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.933 [2024-11-28 12:41:18.655273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.933 [2024-11-28 12:41:18.655316] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:48.933 [2024-11-28 12:41:18.655325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.933 [2024-11-28 12:41:18.655332] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.933 [2024-11-28 12:41:18.655338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:48.933 [2024-11-28 12:41:18.656074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.194 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:49.194 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:49.194 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:49.194 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:49.194 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:49.194 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.194 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:49.194 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:49.194 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.194 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:49.194 [2024-11-28 12:41:19.305711] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.194 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.194 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:49.194 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.194 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:49.456 [2024-11-28 12:41:19.329988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:49.456 malloc0 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:49.456 { 00:10:49.456 "params": { 00:10:49.456 "name": "Nvme$subsystem", 00:10:49.456 "trtype": "$TEST_TRANSPORT", 00:10:49.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:49.456 "adrfam": "ipv4", 00:10:49.456 "trsvcid": "$NVMF_PORT", 00:10:49.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:49.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:49.456 "hdgst": ${hdgst:-false}, 00:10:49.456 "ddgst": ${ddgst:-false} 00:10:49.456 }, 00:10:49.456 "method": "bdev_nvme_attach_controller" 00:10:49.456 } 00:10:49.456 EOF 00:10:49.456 )") 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:49.456 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:49.456 "params": { 00:10:49.456 "name": "Nvme1", 00:10:49.456 "trtype": "tcp", 00:10:49.456 "traddr": "10.0.0.2", 00:10:49.456 "adrfam": "ipv4", 00:10:49.456 "trsvcid": "4420", 00:10:49.456 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:49.456 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:49.456 "hdgst": false, 00:10:49.456 "ddgst": false 00:10:49.456 }, 00:10:49.456 "method": "bdev_nvme_attach_controller" 00:10:49.456 }' 00:10:49.456 [2024-11-28 12:41:19.432399] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:10:49.456 [2024-11-28 12:41:19.432464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3230262 ] 00:10:49.456 [2024-11-28 12:41:19.569057] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:49.718 [2024-11-28 12:41:19.628566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.718 [2024-11-28 12:41:19.656914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.979 Running I/O for 10 seconds... 
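The heredoc-per-subsystem pipeline traced above (`config=()`, `config+=("$(cat <<-EOF ...)")`, `IFS=,`, `printf`) is how `gen_nvmf_target_json` assembles the `--json` config that bdevperf reads from `/dev/fd/62`. A standalone sketch of the same pattern, with the helper name `gen_target_json` and the hard-coded `10.0.0.2:4420` target as assumptions in place of the real `nvmf/common.sh` variables:

```shell
# Sketch of the gen_nvmf_target_json pattern: one heredoc fragment per
# subsystem index, joined with commas into a bdev_nvme_attach_controller
# config. Function name and the fixed address/port are assumptions.
gen_target_json() {
  local config=() subsystem
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  # Comma-join the fragments, mirroring the IFS=, / printf step in the trace.
  local IFS=,
  printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}"
}

gen_target_json 1
```

Passing more than one index (e.g. `gen_target_json 1 2`) emits one attach entry per subsystem, which is why the helper loops over `"${@:-1}"` rather than emitting a single literal block.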
00:10:51.867 6406.00 IOPS, 50.05 MiB/s
[2024-11-28T11:41:23.380Z] 7730.00 IOPS, 60.39 MiB/s
[2024-11-28T11:41:24.322Z] 8393.00 IOPS, 65.57 MiB/s
[2024-11-28T11:41:25.268Z] 8730.50 IOPS, 68.21 MiB/s
[2024-11-28T11:41:26.219Z] 8933.80 IOPS, 69.80 MiB/s
[2024-11-28T11:41:27.160Z] 9069.83 IOPS, 70.86 MiB/s
[2024-11-28T11:41:28.101Z] 9162.29 IOPS, 71.58 MiB/s
[2024-11-28T11:41:29.043Z] 9235.50 IOPS, 72.15 MiB/s
[2024-11-28T11:41:29.985Z] 9291.44 IOPS, 72.59 MiB/s
[2024-11-28T11:41:30.245Z] 9337.50 IOPS, 72.95 MiB/s
00:11:00.118 Latency(us)
00:11:00.118 [2024-11-28T11:41:30.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:00.118 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:11:00.118 Verification LBA range: start 0x0 length 0x1000
00:11:00.118 Nvme1n1 : 10.01 9338.96 72.96 0.00 0.00 13658.56 1621.70 29122.25
00:11:00.118 [2024-11-28T11:41:30.245Z] ===================================================================================================================
00:11:00.118 [2024-11-28T11:41:30.245Z] Total : 9338.96 72.96 0.00 0.00 13658.56 1621.70 29122.25
00:11:00.118 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3232419
00:11:00.118 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:11:00.118 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:00.118 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:11:00.118 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:11:00.118 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:11:00.118 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:11:00.118 12:41:30
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:00.118 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:00.118 { 00:11:00.118 "params": { 00:11:00.118 "name": "Nvme$subsystem", 00:11:00.118 "trtype": "$TEST_TRANSPORT", 00:11:00.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:00.118 "adrfam": "ipv4", 00:11:00.118 "trsvcid": "$NVMF_PORT", 00:11:00.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:00.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:00.118 "hdgst": ${hdgst:-false}, 00:11:00.118 "ddgst": ${ddgst:-false} 00:11:00.118 }, 00:11:00.118 "method": "bdev_nvme_attach_controller" 00:11:00.118 } 00:11:00.118 EOF 00:11:00.118 )") 00:11:00.118 [2024-11-28 12:41:30.095071] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.118 [2024-11-28 12:41:30.095103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.118 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:00.118 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:11:00.118 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:00.118 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:00.118 "params": { 00:11:00.118 "name": "Nvme1", 00:11:00.118 "trtype": "tcp", 00:11:00.118 "traddr": "10.0.0.2", 00:11:00.118 "adrfam": "ipv4", 00:11:00.118 "trsvcid": "4420", 00:11:00.118 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:00.118 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:00.118 "hdgst": false, 00:11:00.118 "ddgst": false 00:11:00.118 }, 00:11:00.118 "method": "bdev_nvme_attach_controller" 00:11:00.118 }' 00:11:00.118 [2024-11-28 12:41:30.107029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.118 [2024-11-28 12:41:30.107038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.118 [2024-11-28 12:41:30.119029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.118 [2024-11-28 12:41:30.119037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.118 [2024-11-28 12:41:30.131030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.118 [2024-11-28 12:41:30.131037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.118 [2024-11-28 12:41:30.140550] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:11:00.118 [2024-11-28 12:41:30.140606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3232419 ] 00:11:00.118 [2024-11-28 12:41:30.143032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.118 [2024-11-28 12:41:30.143041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.118 [2024-11-28 12:41:30.155035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.118 [2024-11-28 12:41:30.155043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.118 [2024-11-28 12:41:30.167038] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.118 [2024-11-28 12:41:30.167047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.118 [2024-11-28 12:41:30.179041] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.118 [2024-11-28 12:41:30.179049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.118 [2024-11-28 12:41:30.191045] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.118 [2024-11-28 12:41:30.191052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.119 [2024-11-28 12:41:30.203047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.119 [2024-11-28 12:41:30.203053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.119 [2024-11-28 12:41:30.215049] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.119 [2024-11-28 12:41:30.215056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:11:00.119 [2024-11-28 12:41:30.227051] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.119 [2024-11-28 12:41:30.227058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.119 [2024-11-28 12:41:30.239051] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.119 [2024-11-28 12:41:30.239059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.379 [2024-11-28 12:41:30.251054] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.379 [2024-11-28 12:41:30.251061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.379 [2024-11-28 12:41:30.263057] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.379 [2024-11-28 12:41:30.263063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.379 [2024-11-28 12:41:30.273781] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
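The 10-second verify run above settles at 9338.96 IOPS, reported as 72.96 MiB/s. Since bdevperf was invoked with `-o 8192`, the throughput column is simply IOPS × io_size / 2^20, which can be sanity-checked in one line:

```shell
# Sanity-check bdevperf's summary line: MiB/s = IOPS * io_size / 2^20,
# with io_size = 8192 bytes taken from the -o 8192 flag above.
awk 'BEGIN { printf "%.2f MiB/s\n", 9338.96 * 8192 / 1048576 }'
```

The same arithmetic reproduces each entry in the IOPS ramp as well (e.g. 6406.00 IOPS ≈ 50.05 MiB/s).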
00:11:00.379 [2024-11-28 12:41:30.275059] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.379 [2024-11-28 12:41:30.275066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.379 [2024-11-28 12:41:30.287062] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.379 [2024-11-28 12:41:30.287072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.379 [2024-11-28 12:41:30.299065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.379 [2024-11-28 12:41:30.299071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.379 [2024-11-28 12:41:30.311070] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.379 [2024-11-28 12:41:30.311076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.379 [2024-11-28 12:41:30.323073] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.379 [2024-11-28 12:41:30.323080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.379 [2024-11-28 12:41:30.329800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.379 [2024-11-28 12:41:30.335077] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.379 [2024-11-28 12:41:30.335084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.379 [2024-11-28 12:41:30.345468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.379 [2024-11-28 12:41:30.347081] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.379 [2024-11-28 12:41:30.347088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.379 [2024-11-28 12:41:30.359089] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.379 [2024-11-28 12:41:30.359098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.379 [2024-11-28 12:41:30.371091] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.379 [2024-11-28 12:41:30.371103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.379 [2024-11-28 12:41:30.383093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.379 [2024-11-28 12:41:30.383103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.379 [2024-11-28 12:41:30.395096] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.379 [2024-11-28 12:41:30.395105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.379 [2024-11-28 12:41:30.407096] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.379 [2024-11-28 12:41:30.407103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.379 [2024-11-28 12:41:30.419112] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.379 [2024-11-28 12:41:30.419128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.379 [2024-11-28 12:41:30.431106] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.379 [2024-11-28 12:41:30.431114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.379 [2024-11-28 12:41:30.443108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.379 [2024-11-28 12:41:30.443116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.379 [2024-11-28 12:41:30.455108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:00.379 [2024-11-28 12:41:30.455114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.380 [2024-11-28 12:41:30.467112] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.380 [2024-11-28 12:41:30.467118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.380 [2024-11-28 12:41:30.479116] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.380 [2024-11-28 12:41:30.479125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.380 [2024-11-28 12:41:30.491119] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.380 [2024-11-28 12:41:30.491129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.380 [2024-11-28 12:41:30.503123] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.380 [2024-11-28 12:41:30.503135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.640 [2024-11-28 12:41:30.515133] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.640 [2024-11-28 12:41:30.515149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.640 Running I/O for 5 seconds... 
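The wall of paired `Requested NSID 1 already in use` / `Unable to add namespace` errors that follows is the test repeatedly re-issuing `nvmf_subsystem_add_ns` against cnode1 while the 5-second randrw job runs (roughly every 12 ms, judging by the timestamps); each attempt is expected to fail because NSID 1 is already attached. A dry-run sketch of such a loop, where `rpc_cmd` is stubbed to echo instead of invoking the real `scripts/rpc.py`, and the iteration count is arbitrary:

```shell
# Dry-run sketch of the add_ns hammer implied by the repeated errors below.
# rpc_cmd is stubbed to echo; the real test would call scripts/rpc.py and
# tolerate the expected "NSID 1 already in use" failure on every iteration.
rpc_cmd() { echo "rpc.py $*"; }

hammer_add_ns() {
  local i
  for i in $(seq 1 "$1"); do
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done
}

hammer_add_ns 3
```

The `|| true` is the important part: the RPC failure is the point of the exercise (the target must stay healthy under duplicate-namespace requests during live I/O), so the loop must not abort on a non-zero exit status.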
00:11:00.640 [2024-11-28 12:41:30.527129] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.640 [2024-11-28 12:41:30.527137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.640 [2024-11-28 12:41:30.542313] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.640 [2024-11-28 12:41:30.542332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.640 [2024-11-28 12:41:30.555540] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.640 [2024-11-28 12:41:30.555558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.640 [2024-11-28 12:41:30.568422] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.640 [2024-11-28 12:41:30.568440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.640 [2024-11-28 12:41:30.581299] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.640 [2024-11-28 12:41:30.581317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.640 [2024-11-28 12:41:30.594432] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.640 [2024-11-28 12:41:30.594449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.640 [2024-11-28 12:41:30.607371] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.640 [2024-11-28 12:41:30.607387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.640 [2024-11-28 12:41:30.620009] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.640 [2024-11-28 12:41:30.620025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.640 [2024-11-28 12:41:30.633644] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.640 [2024-11-28 12:41:30.633660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.640 [2024-11-28 12:41:30.647435] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.640 [2024-11-28 12:41:30.647450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.640 [2024-11-28 12:41:30.661109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.640 [2024-11-28 12:41:30.661126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.640 [2024-11-28 12:41:30.674023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.640 [2024-11-28 12:41:30.674039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.640 [2024-11-28 12:41:30.687164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.640 [2024-11-28 12:41:30.687180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.640 [2024-11-28 12:41:30.700009] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.640 [2024-11-28 12:41:30.700025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.640 [2024-11-28 12:41:30.713265] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.640 [2024-11-28 12:41:30.713280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.640 [2024-11-28 12:41:30.725929] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.640 [2024-11-28 12:41:30.725945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.640 [2024-11-28 12:41:30.738740] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:00.640 [2024-11-28 12:41:30.738756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.640 [2024-11-28 12:41:30.751365] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.640 [2024-11-28 12:41:30.751385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.640 [2024-11-28 12:41:30.764645] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.640 [2024-11-28 12:41:30.764661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.900 [2024-11-28 12:41:30.777863] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.900 [2024-11-28 12:41:30.777879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.900 [2024-11-28 12:41:30.791467] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.900 [2024-11-28 12:41:30.791483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.900 [2024-11-28 12:41:30.804347] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.900 [2024-11-28 12:41:30.804363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.900 [2024-11-28 12:41:30.817014] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.900 [2024-11-28 12:41:30.817030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.900 [2024-11-28 12:41:30.829713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.900 [2024-11-28 12:41:30.829729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.900 [2024-11-28 12:41:30.842185] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.900 
[2024-11-28 12:41:30.842201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.900 [2024-11-28 12:41:30.855579] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.900 [2024-11-28 12:41:30.855595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.900 [2024-11-28 12:41:30.868660] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.900 [2024-11-28 12:41:30.868676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.900 [2024-11-28 12:41:30.881339] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.900 [2024-11-28 12:41:30.881355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.900 [2024-11-28 12:41:30.894363] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.900 [2024-11-28 12:41:30.894378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.900 [2024-11-28 12:41:30.907118] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.900 [2024-11-28 12:41:30.907134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.900 [2024-11-28 12:41:30.920844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.900 [2024-11-28 12:41:30.920859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.900 [2024-11-28 12:41:30.934053] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.900 [2024-11-28 12:41:30.934069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.900 [2024-11-28 12:41:30.947263] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.900 [2024-11-28 12:41:30.947279] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.900 [2024-11-28 12:41:30.960033] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.900 [2024-11-28 12:41:30.960049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.420 18240.00 IOPS, 142.50 MiB/s [2024-11-28T11:41:31.547Z] 00:11:02.463 18325.00 IOPS, 143.16 MiB/s [2024-11-28T11:41:32.590Z] 00:11:02.998 [2024-11-28 12:41:33.001218]
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.998 [2024-11-28 12:41:33.001234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.998 [2024-11-28 12:41:33.013993] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.998 [2024-11-28 12:41:33.014009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.998 [2024-11-28 12:41:33.027452] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.998 [2024-11-28 12:41:33.027468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.998 [2024-11-28 12:41:33.041088] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.998 [2024-11-28 12:41:33.041105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.998 [2024-11-28 12:41:33.054373] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.998 [2024-11-28 12:41:33.054388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.998 [2024-11-28 12:41:33.067460] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.998 [2024-11-28 12:41:33.067476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.998 [2024-11-28 12:41:33.080275] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.998 [2024-11-28 12:41:33.080291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.998 [2024-11-28 12:41:33.093968] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.998 [2024-11-28 12:41:33.093985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.998 [2024-11-28 12:41:33.106947] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:02.998 [2024-11-28 12:41:33.106964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.998 [2024-11-28 12:41:33.119556] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.998 [2024-11-28 12:41:33.119572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-11-28 12:41:33.132941] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-11-28 12:41:33.132957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-11-28 12:41:33.146606] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-11-28 12:41:33.146622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-11-28 12:41:33.160346] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-11-28 12:41:33.160362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-11-28 12:41:33.173153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-11-28 12:41:33.173174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-11-28 12:41:33.186053] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-11-28 12:41:33.186069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-11-28 12:41:33.199690] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-11-28 12:41:33.199706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-11-28 12:41:33.212181] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 
[2024-11-28 12:41:33.212198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-11-28 12:41:33.225984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-11-28 12:41:33.226000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-11-28 12:41:33.238857] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-11-28 12:41:33.238873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-11-28 12:41:33.251956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-11-28 12:41:33.251972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-11-28 12:41:33.265765] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-11-28 12:41:33.265781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-11-28 12:41:33.278439] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-11-28 12:41:33.278456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-11-28 12:41:33.290982] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-11-28 12:41:33.290998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-11-28 12:41:33.303885] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-11-28 12:41:33.303901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-11-28 12:41:33.317147] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-11-28 12:41:33.317168] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-11-28 12:41:33.329827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-11-28 12:41:33.329844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-11-28 12:41:33.343133] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-11-28 12:41:33.343149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-11-28 12:41:33.355932] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-11-28 12:41:33.355948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-11-28 12:41:33.369635] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-11-28 12:41:33.369651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.259 [2024-11-28 12:41:33.382546] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.259 [2024-11-28 12:41:33.382562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.521 [2024-11-28 12:41:33.395712] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.521 [2024-11-28 12:41:33.395728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.521 [2024-11-28 12:41:33.409128] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.521 [2024-11-28 12:41:33.409145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.521 [2024-11-28 12:41:33.421559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.521 [2024-11-28 12:41:33.421574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:03.521 [2024-11-28 12:41:33.434396] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.521 [2024-11-28 12:41:33.434412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.521 [2024-11-28 12:41:33.447852] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.521 [2024-11-28 12:41:33.447868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.521 [2024-11-28 12:41:33.461322] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.521 [2024-11-28 12:41:33.461338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.521 [2024-11-28 12:41:33.475122] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.521 [2024-11-28 12:41:33.475138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.521 [2024-11-28 12:41:33.488615] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.521 [2024-11-28 12:41:33.488631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.521 [2024-11-28 12:41:33.501641] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.521 [2024-11-28 12:41:33.501657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.521 [2024-11-28 12:41:33.514157] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.521 [2024-11-28 12:41:33.514178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.521 18361.33 IOPS, 143.45 MiB/s [2024-11-28T11:41:33.648Z] [2024-11-28 12:41:33.527423] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.521 [2024-11-28 12:41:33.527438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:03.521 [2024-11-28 12:41:33.540863] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.521 [2024-11-28 12:41:33.540879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.521 [2024-11-28 12:41:33.554643] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.521 [2024-11-28 12:41:33.554659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.521 [2024-11-28 12:41:33.567538] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.521 [2024-11-28 12:41:33.567554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.521 [2024-11-28 12:41:33.580282] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.521 [2024-11-28 12:41:33.580298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.521 [2024-11-28 12:41:33.593924] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.521 [2024-11-28 12:41:33.593939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.521 [2024-11-28 12:41:33.607348] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.521 [2024-11-28 12:41:33.607364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.521 [2024-11-28 12:41:33.620416] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.521 [2024-11-28 12:41:33.620432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.521 [2024-11-28 12:41:33.634218] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.521 [2024-11-28 12:41:33.634233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.782 [2024-11-28 12:41:33.647520] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.782 [2024-11-28 12:41:33.647535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.782 [2024-11-28 12:41:33.660436] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.782 [2024-11-28 12:41:33.660453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.782 [2024-11-28 12:41:33.673368] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.782 [2024-11-28 12:41:33.673383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.782 [2024-11-28 12:41:33.686743] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.782 [2024-11-28 12:41:33.686758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.782 [2024-11-28 12:41:33.699548] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.782 [2024-11-28 12:41:33.699564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.782 [2024-11-28 12:41:33.712514] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.782 [2024-11-28 12:41:33.712530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.782 [2024-11-28 12:41:33.726066] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.782 [2024-11-28 12:41:33.726085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.782 [2024-11-28 12:41:33.739295] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.782 [2024-11-28 12:41:33.739312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.782 [2024-11-28 12:41:33.752119] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:03.782 [2024-11-28 12:41:33.752135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.782 [2024-11-28 12:41:33.765744] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.782 [2024-11-28 12:41:33.765759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.782 [2024-11-28 12:41:33.778451] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.782 [2024-11-28 12:41:33.778466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.782 [2024-11-28 12:41:33.791684] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.782 [2024-11-28 12:41:33.791700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.782 [2024-11-28 12:41:33.804727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.782 [2024-11-28 12:41:33.804743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.782 [2024-11-28 12:41:33.817622] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.782 [2024-11-28 12:41:33.817639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.782 [2024-11-28 12:41:33.830102] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.782 [2024-11-28 12:41:33.830118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.782 [2024-11-28 12:41:33.842642] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.782 [2024-11-28 12:41:33.842657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.782 [2024-11-28 12:41:33.855978] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.782 
[2024-11-28 12:41:33.855995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.782 [2024-11-28 12:41:33.869889] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.782 [2024-11-28 12:41:33.869905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.782 [2024-11-28 12:41:33.882971] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.782 [2024-11-28 12:41:33.882987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.782 [2024-11-28 12:41:33.896072] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.782 [2024-11-28 12:41:33.896087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.043 [2024-11-28 12:41:33.908891] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.043 [2024-11-28 12:41:33.908907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.043 [2024-11-28 12:41:33.921803] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.043 [2024-11-28 12:41:33.921819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.043 [2024-11-28 12:41:33.934530] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.043 [2024-11-28 12:41:33.934546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.043 [2024-11-28 12:41:33.948215] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.043 [2024-11-28 12:41:33.948230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.043 [2024-11-28 12:41:33.961117] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.043 [2024-11-28 12:41:33.961133] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.043 [2024-11-28 12:41:33.974011] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.043 [2024-11-28 12:41:33.974031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.043 [2024-11-28 12:41:33.987690] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.043 [2024-11-28 12:41:33.987706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.043 [2024-11-28 12:41:34.000968] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.043 [2024-11-28 12:41:34.000984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.043 [2024-11-28 12:41:34.013718] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.043 [2024-11-28 12:41:34.013733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.043 [2024-11-28 12:41:34.026837] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.043 [2024-11-28 12:41:34.026853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.043 [2024-11-28 12:41:34.040464] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.043 [2024-11-28 12:41:34.040480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.043 [2024-11-28 12:41:34.054225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.043 [2024-11-28 12:41:34.054240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.043 [2024-11-28 12:41:34.067435] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.043 [2024-11-28 12:41:34.067450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:04.043 [2024-11-28 12:41:34.081142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.043 [2024-11-28 12:41:34.081162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.043 [2024-11-28 12:41:34.093847] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.043 [2024-11-28 12:41:34.093863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.043 [2024-11-28 12:41:34.107360] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.043 [2024-11-28 12:41:34.107377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.043 [2024-11-28 12:41:34.120094] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.043 [2024-11-28 12:41:34.120110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.043 [2024-11-28 12:41:34.132713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.043 [2024-11-28 12:41:34.132729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.043 [2024-11-28 12:41:34.146526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.043 [2024-11-28 12:41:34.146541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.043 [2024-11-28 12:41:34.160312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.044 [2024-11-28 12:41:34.160327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.304 [2024-11-28 12:41:34.172973] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.304 [2024-11-28 12:41:34.172989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.304 [2024-11-28 12:41:34.186356] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.304 [2024-11-28 12:41:34.186372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.304 [2024-11-28 12:41:34.199470] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.304 [2024-11-28 12:41:34.199486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.304 [2024-11-28 12:41:34.212427] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.304 [2024-11-28 12:41:34.212443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.304 [2024-11-28 12:41:34.225460] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.304 [2024-11-28 12:41:34.225479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.304 [2024-11-28 12:41:34.239142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.304 [2024-11-28 12:41:34.239162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.304 [2024-11-28 12:41:34.252557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.304 [2024-11-28 12:41:34.252573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.304 [2024-11-28 12:41:34.266099] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.304 [2024-11-28 12:41:34.266115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.304 [2024-11-28 12:41:34.279841] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.304 [2024-11-28 12:41:34.279857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.304 [2024-11-28 12:41:34.293524] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:04.304 [2024-11-28 12:41:34.293542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.304 [2024-11-28 12:41:34.306668] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.304 [2024-11-28 12:41:34.306684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.304 [2024-11-28 12:41:34.319807] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.304 [2024-11-28 12:41:34.319824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.304 [2024-11-28 12:41:34.333335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.304 [2024-11-28 12:41:34.333351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.304 [2024-11-28 12:41:34.347073] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.304 [2024-11-28 12:41:34.347089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.304 [2024-11-28 12:41:34.360521] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.305 [2024-11-28 12:41:34.360537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.305 [2024-11-28 12:41:34.374320] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.305 [2024-11-28 12:41:34.374335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.305 [2024-11-28 12:41:34.388174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.305 [2024-11-28 12:41:34.388190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.305 [2024-11-28 12:41:34.401008] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.305 
[2024-11-28 12:41:34.401024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.305 [2024-11-28 12:41:34.414649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.305 [2024-11-28 12:41:34.414665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.305 [2024-11-28 12:41:34.427616] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.305 [2024-11-28 12:41:34.427633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.566 [2024-11-28 12:41:34.440786] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.566 [2024-11-28 12:41:34.440803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.566 [2024-11-28 12:41:34.454318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.566 [2024-11-28 12:41:34.454335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.566 [2024-11-28 12:41:34.468065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.566 [2024-11-28 12:41:34.468082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.566 [2024-11-28 12:41:34.480706] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.566 [2024-11-28 12:41:34.480723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.566 [2024-11-28 12:41:34.494247] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.566 [2024-11-28 12:41:34.494263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.566 [2024-11-28 12:41:34.507220] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.566 [2024-11-28 12:41:34.507236] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.566 [2024-11-28 12:41:34.520042] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.566 [2024-11-28 12:41:34.520057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.566 18387.00 IOPS, 143.65 MiB/s [2024-11-28T11:41:34.693Z] [2024-11-28 12:41:34.533723] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.566 [2024-11-28 12:41:34.533739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.566 [2024-11-28 12:41:34.546661] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.566 [2024-11-28 12:41:34.546677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.566 [2024-11-28 12:41:34.559632] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.566 [2024-11-28 12:41:34.559648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.566 [2024-11-28 12:41:34.572765] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.566 [2024-11-28 12:41:34.572782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.566 [2024-11-28 12:41:34.586171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.566 [2024-11-28 12:41:34.586187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.566 [2024-11-28 12:41:34.599681] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.566 [2024-11-28 12:41:34.599698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.566 [2024-11-28 12:41:34.613241] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.566 [2024-11-28 12:41:34.613257] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.566 [2024-11-28 12:41:34.626872] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.566 [2024-11-28 12:41:34.626888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[subsequent identical "Requested NSID 1 already in use" / "Unable to add namespace" error pairs elided]
*ERROR*: Requested NSID 1 already in use 00:11:05.610 [2024-11-28 12:41:35.481986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[repeated "Requested NSID 1 already in use" / "Unable to add namespace" error pairs elided]
00:11:05.610 18378.60 IOPS, 143.58 MiB/s
00:11:05.610 Latency(us)
00:11:05.610 [2024-11-28T11:41:35.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:05.610 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:11:05.610 Nvme1n1 : 5.01 18381.88 143.61 0.00 0.00 6957.46 3051.81 19487.82
00:11:05.610 [2024-11-28T11:41:35.737Z] ===================================================================================================================
00:11:05.610 [2024-11-28T11:41:35.737Z] Total : 18381.88 143.61 0.00 0.00 6957.46 3051.81 19487.82
00:11:05.610 [2024-11-28 12:41:35.530017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.610 [2024-11-28 12:41:35.530032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.610 [2024-11-28 12:41:35.542006] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.610 [2024-11-28 12:41:35.542019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.610
[repeated "Requested NSID 1 already in use" / "Unable to add namespace" error pairs elided]
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3232419) - No such process 00:11:05.610 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3232419 00:11:05.610 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy --
target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.610 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.610 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.610 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.610 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:05.610 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.610 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.610 delay0 00:11:05.610 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.610 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:05.610 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.610 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.610 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.610 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:05.871 [2024-11-28 12:41:35.904330] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:14.004 Initializing NVMe Controllers 00:11:14.004 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:11:14.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:14.004 Initialization complete. Launching workers. 00:11:14.004 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 242, failed: 32759 00:11:14.005 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 32880, failed to submit 121 00:11:14.005 success 32787, unsuccessful 93, failed 0 00:11:14.005 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:14.005 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:14.005 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:14.005 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:14.005 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:14.005 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:14.005 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:14.005 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:14.005 rmmod nvme_tcp 00:11:14.005 rmmod nvme_fabrics 00:11:14.005 rmmod nvme_keyring 00:11:14.005 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:14.005 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:14.005 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:14.005 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3230059 ']' 00:11:14.005 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3230059 00:11:14.005 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3230059 ']' 00:11:14.005 12:41:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3230059 00:11:14.005 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:11:14.005 12:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.005 12:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3230059 00:11:14.005 12:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:14.005 12:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:14.005 12:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3230059' 00:11:14.005 killing process with pid 3230059 00:11:14.005 12:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3230059 00:11:14.005 12:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3230059 00:11:14.005 12:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:14.005 12:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:14.005 12:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:14.005 12:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:14.005 12:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:11:14.005 12:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:14.005 12:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:11:14.005 12:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:14.005 12:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:14.005 12:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.005 12:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.005 12:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.387 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:15.387 00:11:15.387 real 0m34.710s 00:11:15.387 user 0m45.457s 00:11:15.387 sys 0m11.842s 00:11:15.387 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.387 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:15.387 ************************************ 00:11:15.387 END TEST nvmf_zcopy 00:11:15.387 ************************************ 00:11:15.387 12:41:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:15.387 12:41:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:15.387 12:41:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.387 12:41:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:15.387 ************************************ 00:11:15.387 START TEST nvmf_nmic 00:11:15.387 ************************************ 00:11:15.387 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:15.387 * Looking for test storage... 
00:11:15.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.387 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:15.387 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:11:15.387 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:15.648 12:41:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:15.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.648 --rc genhtml_branch_coverage=1 00:11:15.648 --rc genhtml_function_coverage=1 00:11:15.648 --rc genhtml_legend=1 00:11:15.648 --rc geninfo_all_blocks=1 00:11:15.648 --rc geninfo_unexecuted_blocks=1 
00:11:15.648 00:11:15.648 ' 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:15.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.648 --rc genhtml_branch_coverage=1 00:11:15.648 --rc genhtml_function_coverage=1 00:11:15.648 --rc genhtml_legend=1 00:11:15.648 --rc geninfo_all_blocks=1 00:11:15.648 --rc geninfo_unexecuted_blocks=1 00:11:15.648 00:11:15.648 ' 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:15.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.648 --rc genhtml_branch_coverage=1 00:11:15.648 --rc genhtml_function_coverage=1 00:11:15.648 --rc genhtml_legend=1 00:11:15.648 --rc geninfo_all_blocks=1 00:11:15.648 --rc geninfo_unexecuted_blocks=1 00:11:15.648 00:11:15.648 ' 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:15.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.648 --rc genhtml_branch_coverage=1 00:11:15.648 --rc genhtml_function_coverage=1 00:11:15.648 --rc genhtml_legend=1 00:11:15.648 --rc geninfo_all_blocks=1 00:11:15.648 --rc geninfo_unexecuted_blocks=1 00:11:15.648 00:11:15.648 ' 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.648 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.649 12:41:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:15.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:15.649 
12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:15.649 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:23.789 12:41:52 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:23.789 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:23.789 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:23.790 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:23.790 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:23.790 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:23.790 
12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:23.790 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:23.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:23.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:11:23.790 00:11:23.790 --- 10.0.0.2 ping statistics --- 00:11:23.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.790 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:23.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:23.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:11:23.790 00:11:23.790 --- 10.0.0.1 ping statistics --- 00:11:23.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.790 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3239123 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3239123 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3239123 
']' 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:23.790 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.790 [2024-11-28 12:41:53.177120] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:11:23.790 [2024-11-28 12:41:53.177210] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.790 [2024-11-28 12:41:53.321156] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:23.790 [2024-11-28 12:41:53.381153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:23.790 [2024-11-28 12:41:53.410655] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.790 [2024-11-28 12:41:53.410705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:23.790 [2024-11-28 12:41:53.410714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.790 [2024-11-28 12:41:53.410721] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.790 [2024-11-28 12:41:53.410727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:23.790 [2024-11-28 12:41:53.412701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.790 [2024-11-28 12:41:53.412863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.790 [2024-11-28 12:41:53.413029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.790 [2024-11-28 12:41:53.413030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:24.081 [2024-11-28 12:41:54.056714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.081 
12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:24.081 Malloc0 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:24.081 [2024-11-28 12:41:54.135840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:24.081 test case1: single bdev can't be used in multiple subsystems 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:24.081 [2024-11-28 12:41:54.171576] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:24.081 [2024-11-28 
12:41:54.171603] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:24.081 [2024-11-28 12:41:54.171612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.081 request: 00:11:24.081 { 00:11:24.081 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:24.081 "namespace": { 00:11:24.081 "bdev_name": "Malloc0", 00:11:24.081 "no_auto_visible": false, 00:11:24.081 "hide_metadata": false 00:11:24.081 }, 00:11:24.081 "method": "nvmf_subsystem_add_ns", 00:11:24.081 "req_id": 1 00:11:24.081 } 00:11:24.081 Got JSON-RPC error response 00:11:24.081 response: 00:11:24.081 { 00:11:24.081 "code": -32602, 00:11:24.081 "message": "Invalid parameters" 00:11:24.081 } 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:24.081 Adding namespace failed - expected result. 
00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:24.081 test case2: host connect to nvmf target in multiple paths 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.081 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:24.081 [2024-11-28 12:41:54.183758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:24.387 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.387 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:25.827 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:27.207 12:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:27.207 12:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:27.207 12:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:27.207 12:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:27.207 12:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:11:29.748 12:41:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:29.748 12:41:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:29.748 12:41:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:29.748 12:41:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:29.748 12:41:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.748 12:41:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:29.748 12:41:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:29.748 [global] 00:11:29.748 thread=1 00:11:29.748 invalidate=1 00:11:29.748 rw=write 00:11:29.748 time_based=1 00:11:29.748 runtime=1 00:11:29.748 ioengine=libaio 00:11:29.748 direct=1 00:11:29.748 bs=4096 00:11:29.748 iodepth=1 00:11:29.748 norandommap=0 00:11:29.748 numjobs=1 00:11:29.748 00:11:29.748 verify_dump=1 00:11:29.748 verify_backlog=512 00:11:29.748 verify_state_save=0 00:11:29.748 do_verify=1 00:11:29.748 verify=crc32c-intel 00:11:29.748 [job0] 00:11:29.748 filename=/dev/nvme0n1 00:11:29.748 Could not set queue depth (nvme0n1) 00:11:29.748 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.748 fio-3.35 00:11:29.748 Starting 1 thread 00:11:31.132 00:11:31.132 job0: (groupid=0, jobs=1): err= 0: pid=3240671: Thu Nov 28 12:42:00 2024 00:11:31.132 read: IOPS=16, BW=65.5KiB/s (67.1kB/s)(68.0KiB/1038msec) 00:11:31.132 slat (nsec): min=25975, max=28267, avg=26744.35, stdev=544.39 00:11:31.132 clat (usec): min=1157, max=42109, avg=39427.08, stdev=9866.77 00:11:31.132 lat (usec): min=1183, max=42136, 
avg=39453.82, stdev=9866.91 00:11:31.132 clat percentiles (usec): 00:11:31.132 | 1.00th=[ 1156], 5.00th=[ 1156], 10.00th=[41157], 20.00th=[41681], 00:11:31.132 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:11:31.132 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:31.132 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:31.132 | 99.99th=[42206] 00:11:31.132 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:11:31.132 slat (usec): min=10, max=25784, avg=80.85, stdev=1138.22 00:11:31.132 clat (usec): min=197, max=950, avg=629.59, stdev=144.99 00:11:31.132 lat (usec): min=209, max=26436, avg=710.44, stdev=1148.77 00:11:31.132 clat percentiles (usec): 00:11:31.132 | 1.00th=[ 260], 5.00th=[ 371], 10.00th=[ 416], 20.00th=[ 498], 00:11:31.132 | 30.00th=[ 553], 40.00th=[ 603], 50.00th=[ 652], 60.00th=[ 701], 00:11:31.132 | 70.00th=[ 725], 80.00th=[ 766], 90.00th=[ 791], 95.00th=[ 816], 00:11:31.132 | 99.00th=[ 881], 99.50th=[ 914], 99.90th=[ 955], 99.95th=[ 955], 00:11:31.132 | 99.99th=[ 955] 00:11:31.132 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:31.132 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:31.132 lat (usec) : 250=0.57%, 500=19.09%, 750=55.20%, 1000=21.93% 00:11:31.132 lat (msec) : 2=0.19%, 50=3.02% 00:11:31.132 cpu : usr=0.77%, sys=1.45%, ctx=532, majf=0, minf=1 00:11:31.132 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:31.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.132 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.132 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:31.132 00:11:31.132 Run status group 0 (all jobs): 00:11:31.132 READ: bw=65.5KiB/s (67.1kB/s), 65.5KiB/s-65.5KiB/s 
(67.1kB/s-67.1kB/s), io=68.0KiB (69.6kB), run=1038-1038msec 00:11:31.132 WRITE: bw=1973KiB/s (2020kB/s), 1973KiB/s-1973KiB/s (2020kB/s-2020kB/s), io=2048KiB (2097kB), run=1038-1038msec 00:11:31.132 00:11:31.132 Disk stats (read/write): 00:11:31.132 nvme0n1: ios=38/512, merge=0/0, ticks=1463/309, in_queue=1772, util=98.60% 00:11:31.132 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:31.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:31.132 12:42:01 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:31.132 rmmod nvme_tcp 00:11:31.132 rmmod nvme_fabrics 00:11:31.132 rmmod nvme_keyring 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3239123 ']' 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3239123 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3239123 ']' 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3239123 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3239123 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3239123' 00:11:31.132 killing process with pid 3239123 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3239123 00:11:31.132 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 
3239123 00:11:31.406 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:31.406 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:31.406 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:31.406 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:31.406 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:31.406 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:31.406 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:31.406 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:31.406 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:31.406 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.406 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.406 12:42:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.323 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:33.323 00:11:33.323 real 0m18.029s 00:11:33.323 user 0m48.439s 00:11:33.323 sys 0m6.542s 00:11:33.323 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.323 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:33.323 ************************************ 00:11:33.323 END TEST nvmf_nmic 00:11:33.323 ************************************ 00:11:33.323 12:42:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:33.323 12:42:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:33.323 12:42:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.323 12:42:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:33.584 ************************************ 00:11:33.584 START TEST nvmf_fio_target 00:11:33.584 ************************************ 00:11:33.584 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:33.584 * Looking for test storage... 00:11:33.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.584 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:33.584 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:33.584 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:33.584 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:33.584 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.584 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.584 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.584 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.584 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.584 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:11:33.584 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.584 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.584 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.584 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:33.585 12:42:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:33.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.585 --rc genhtml_branch_coverage=1 00:11:33.585 --rc genhtml_function_coverage=1 00:11:33.585 --rc genhtml_legend=1 00:11:33.585 --rc geninfo_all_blocks=1 00:11:33.585 --rc geninfo_unexecuted_blocks=1 00:11:33.585 00:11:33.585 ' 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:33.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.585 --rc genhtml_branch_coverage=1 00:11:33.585 --rc genhtml_function_coverage=1 00:11:33.585 --rc genhtml_legend=1 00:11:33.585 --rc geninfo_all_blocks=1 00:11:33.585 --rc geninfo_unexecuted_blocks=1 00:11:33.585 00:11:33.585 ' 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:33.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.585 --rc genhtml_branch_coverage=1 00:11:33.585 --rc genhtml_function_coverage=1 00:11:33.585 --rc genhtml_legend=1 00:11:33.585 --rc geninfo_all_blocks=1 00:11:33.585 --rc geninfo_unexecuted_blocks=1 00:11:33.585 00:11:33.585 ' 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:11:33.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.585 --rc genhtml_branch_coverage=1 00:11:33.585 --rc genhtml_function_coverage=1 00:11:33.585 --rc genhtml_legend=1 00:11:33.585 --rc geninfo_all_blocks=1 00:11:33.585 --rc geninfo_unexecuted_blocks=1 00:11:33.585 00:11:33.585 ' 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:33.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:33.585 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:33.586 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:33.586 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:33.586 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:33.586 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:33.586 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.586 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:33.586 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:33.586 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:33.586 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.586 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.586 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.586 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:33.586 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:33.586 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:33.586 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:41.728 12:42:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:41.728 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:41.728 12:42:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:41.728 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:41.728 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:41.728 Found net devices under 0000:4b:00.1: cvl_0_1 
00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.728 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:41.729 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:41.729 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:41.729 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:41.729 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:41.729 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:41.729 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.729 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.729 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:41.729 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:41.729 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:41.729 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:41.729 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:41.729 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:41.729 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:41.729 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:11:41.729 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:41.729 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:41.729 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:41.729 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:41.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:41.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:11:41.729 00:11:41.729 --- 10.0.0.2 ping statistics --- 00:11:41.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.729 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:41.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:41.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:11:41.729 00:11:41.729 --- 10.0.0.1 ping statistics --- 00:11:41.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.729 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3245726 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3245726 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3245726 ']' 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.729 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.729 [2024-11-28 12:42:11.356977] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:11:41.729 [2024-11-28 12:42:11.357046] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.729 [2024-11-28 12:42:11.502542] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:41.729 [2024-11-28 12:42:11.562285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.729 [2024-11-28 12:42:11.590550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.729 [2024-11-28 12:42:11.590591] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.729 [2024-11-28 12:42:11.590604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.729 [2024-11-28 12:42:11.590611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.729 [2024-11-28 12:42:11.590617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:41.729 [2024-11-28 12:42:11.592731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.729 [2024-11-28 12:42:11.592889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.729 [2024-11-28 12:42:11.593049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.729 [2024-11-28 12:42:11.593050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.303 12:42:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.303 12:42:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:42.303 12:42:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:42.303 12:42:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:42.303 12:42:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.303 12:42:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.303 12:42:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:42.303 [2024-11-28 12:42:12.377464] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.303 12:42:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:42.564 12:42:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:42.564 12:42:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:42.826 12:42:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:42.826 12:42:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:43.088 12:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:43.088 12:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:43.349 12:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:43.349 12:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:43.610 12:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:43.610 12:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:43.610 12:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:43.869 12:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:43.869 12:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:44.129 12:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:44.129 12:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:11:44.390 12:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:44.390 12:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:44.390 12:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:44.652 12:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:44.652 12:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:44.913 12:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.913 [2024-11-28 12:42:14.949916] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.913 12:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:45.176 12:42:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:45.436 12:42:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:11:46.819 12:42:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:46.819 12:42:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:46.819 12:42:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:46.819 12:42:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:46.819 12:42:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:46.819 12:42:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:48.735 12:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:48.735 12:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:48.735 12:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:48.735 12:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:48.996 12:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:48.996 12:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:48.996 12:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:48.996 [global] 00:11:48.996 thread=1 00:11:48.996 invalidate=1 00:11:48.996 rw=write 00:11:48.996 time_based=1 00:11:48.996 runtime=1 00:11:48.996 ioengine=libaio 00:11:48.996 direct=1 00:11:48.996 bs=4096 00:11:48.996 iodepth=1 00:11:48.996 norandommap=0 00:11:48.996 numjobs=1 00:11:48.996 00:11:48.996 
verify_dump=1 00:11:48.996 verify_backlog=512 00:11:48.996 verify_state_save=0 00:11:48.996 do_verify=1 00:11:48.996 verify=crc32c-intel 00:11:48.996 [job0] 00:11:48.996 filename=/dev/nvme0n1 00:11:48.996 [job1] 00:11:48.996 filename=/dev/nvme0n2 00:11:48.996 [job2] 00:11:48.996 filename=/dev/nvme0n3 00:11:48.996 [job3] 00:11:48.996 filename=/dev/nvme0n4 00:11:48.996 Could not set queue depth (nvme0n1) 00:11:48.996 Could not set queue depth (nvme0n2) 00:11:48.996 Could not set queue depth (nvme0n3) 00:11:48.996 Could not set queue depth (nvme0n4) 00:11:49.258 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:49.258 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:49.258 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:49.258 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:49.258 fio-3.35 00:11:49.258 Starting 4 threads 00:11:50.645 00:11:50.645 job0: (groupid=0, jobs=1): err= 0: pid=3247519: Thu Nov 28 12:42:20 2024 00:11:50.645 read: IOPS=704, BW=2817KiB/s (2885kB/s)(2820KiB/1001msec) 00:11:50.645 slat (nsec): min=7107, max=57531, avg=23625.99, stdev=8084.12 00:11:50.645 clat (usec): min=230, max=1099, avg=741.98, stdev=96.15 00:11:50.645 lat (usec): min=242, max=1118, avg=765.60, stdev=98.04 00:11:50.645 clat percentiles (usec): 00:11:50.645 | 1.00th=[ 396], 5.00th=[ 570], 10.00th=[ 635], 20.00th=[ 676], 00:11:50.645 | 30.00th=[ 709], 40.00th=[ 742], 50.00th=[ 766], 60.00th=[ 783], 00:11:50.645 | 70.00th=[ 791], 80.00th=[ 816], 90.00th=[ 832], 95.00th=[ 857], 00:11:50.645 | 99.00th=[ 889], 99.50th=[ 906], 99.90th=[ 1106], 99.95th=[ 1106], 00:11:50.645 | 99.99th=[ 1106] 00:11:50.645 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:11:50.645 slat (nsec): min=9818, max=55378, avg=27902.69, 
stdev=11522.82 00:11:50.645 clat (usec): min=101, max=770, avg=410.35, stdev=84.86 00:11:50.645 lat (usec): min=112, max=780, avg=438.25, stdev=90.40 00:11:50.645 clat percentiles (usec): 00:11:50.645 | 1.00th=[ 167], 5.00th=[ 265], 10.00th=[ 289], 20.00th=[ 330], 00:11:50.645 | 30.00th=[ 359], 40.00th=[ 408], 50.00th=[ 437], 60.00th=[ 453], 00:11:50.645 | 70.00th=[ 465], 80.00th=[ 478], 90.00th=[ 498], 95.00th=[ 519], 00:11:50.645 | 99.00th=[ 562], 99.50th=[ 603], 99.90th=[ 660], 99.95th=[ 775], 00:11:50.645 | 99.99th=[ 775] 00:11:50.645 bw ( KiB/s): min= 4096, max= 4096, per=29.61%, avg=4096.00, stdev= 0.00, samples=1 00:11:50.645 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:50.645 lat (usec) : 250=2.14%, 500=52.46%, 750=21.86%, 1000=23.48% 00:11:50.645 lat (msec) : 2=0.06% 00:11:50.645 cpu : usr=2.40%, sys=4.60%, ctx=1730, majf=0, minf=1 00:11:50.645 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.645 issued rwts: total=705,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.645 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.645 job1: (groupid=0, jobs=1): err= 0: pid=3247520: Thu Nov 28 12:42:20 2024 00:11:50.645 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:11:50.645 slat (nsec): min=7102, max=62754, avg=25721.19, stdev=6582.54 00:11:50.645 clat (usec): min=151, max=804, avg=564.91, stdev=110.85 00:11:50.645 lat (usec): min=159, max=847, avg=590.63, stdev=111.51 00:11:50.645 clat percentiles (usec): 00:11:50.645 | 1.00th=[ 289], 5.00th=[ 375], 10.00th=[ 404], 20.00th=[ 437], 00:11:50.645 | 30.00th=[ 523], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 627], 00:11:50.645 | 70.00th=[ 644], 80.00th=[ 652], 90.00th=[ 676], 95.00th=[ 693], 00:11:50.645 | 99.00th=[ 734], 99.50th=[ 750], 99.90th=[ 766], 99.95th=[ 
807], 00:11:50.645 | 99.99th=[ 807] 00:11:50.645 write: IOPS=1254, BW=5019KiB/s (5139kB/s)(5024KiB/1001msec); 0 zone resets 00:11:50.645 slat (nsec): min=9632, max=64942, avg=24174.78, stdev=12496.56 00:11:50.645 clat (usec): min=90, max=1086, avg=278.04, stdev=96.94 00:11:50.645 lat (usec): min=100, max=1123, avg=302.22, stdev=100.68 00:11:50.645 clat percentiles (usec): 00:11:50.645 | 1.00th=[ 100], 5.00th=[ 110], 10.00th=[ 116], 20.00th=[ 208], 00:11:50.645 | 30.00th=[ 260], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 302], 00:11:50.645 | 70.00th=[ 318], 80.00th=[ 334], 90.00th=[ 375], 95.00th=[ 412], 00:11:50.645 | 99.00th=[ 519], 99.50th=[ 652], 99.90th=[ 955], 99.95th=[ 1090], 00:11:50.645 | 99.99th=[ 1090] 00:11:50.645 bw ( KiB/s): min= 5232, max= 5232, per=37.82%, avg=5232.00, stdev= 0.00, samples=1 00:11:50.645 iops : min= 1308, max= 1308, avg=1308.00, stdev= 0.00, samples=1 00:11:50.645 lat (usec) : 100=0.48%, 250=15.35%, 500=51.14%, 750=32.59%, 1000=0.39% 00:11:50.645 lat (msec) : 2=0.04% 00:11:50.645 cpu : usr=2.40%, sys=6.50%, ctx=2281, majf=0, minf=1 00:11:50.645 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.645 issued rwts: total=1024,1256,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.645 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.645 job2: (groupid=0, jobs=1): err= 0: pid=3247521: Thu Nov 28 12:42:20 2024 00:11:50.645 read: IOPS=200, BW=800KiB/s (819kB/s)(824KiB/1030msec) 00:11:50.645 slat (nsec): min=2639, max=44727, avg=21774.02, stdev=9744.61 00:11:50.645 clat (usec): min=276, max=42009, avg=3797.75, stdev=10473.67 00:11:50.645 lat (usec): min=303, max=42041, avg=3819.53, stdev=10474.45 00:11:50.645 clat percentiles (usec): 00:11:50.645 | 1.00th=[ 523], 5.00th=[ 603], 10.00th=[ 685], 20.00th=[ 750], 00:11:50.645 | 
30.00th=[ 791], 40.00th=[ 848], 50.00th=[ 906], 60.00th=[ 938], 00:11:50.645 | 70.00th=[ 988], 80.00th=[ 1020], 90.00th=[ 1139], 95.00th=[41157], 00:11:50.645 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:11:50.645 | 99.99th=[42206] 00:11:50.645 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:11:50.645 slat (nsec): min=10152, max=59842, avg=30025.19, stdev=10897.13 00:11:50.645 clat (usec): min=227, max=678, avg=434.30, stdev=94.72 00:11:50.645 lat (usec): min=240, max=714, avg=464.33, stdev=100.73 00:11:50.645 clat percentiles (usec): 00:11:50.645 | 1.00th=[ 255], 5.00th=[ 281], 10.00th=[ 306], 20.00th=[ 343], 00:11:50.645 | 30.00th=[ 375], 40.00th=[ 416], 50.00th=[ 441], 60.00th=[ 461], 00:11:50.645 | 70.00th=[ 486], 80.00th=[ 515], 90.00th=[ 562], 95.00th=[ 594], 00:11:50.645 | 99.00th=[ 644], 99.50th=[ 668], 99.90th=[ 676], 99.95th=[ 676], 00:11:50.645 | 99.99th=[ 676] 00:11:50.645 bw ( KiB/s): min= 4096, max= 4096, per=29.61%, avg=4096.00, stdev= 0.00, samples=1 00:11:50.645 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:50.645 lat (usec) : 250=0.56%, 500=53.48%, 750=23.40%, 1000=15.60% 00:11:50.645 lat (msec) : 2=4.87%, 50=2.09% 00:11:50.645 cpu : usr=0.58%, sys=2.33%, ctx=719, majf=0, minf=1 00:11:50.645 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.645 issued rwts: total=206,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.645 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.645 job3: (groupid=0, jobs=1): err= 0: pid=3247523: Thu Nov 28 12:42:20 2024 00:11:50.645 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:50.645 slat (nsec): min=8192, max=47184, avg=27011.55, stdev=3206.85 00:11:50.645 clat (usec): min=577, max=1601, avg=1004.86, 
stdev=108.33 00:11:50.645 lat (usec): min=604, max=1628, avg=1031.87, stdev=108.42 00:11:50.645 clat percentiles (usec): 00:11:50.645 | 1.00th=[ 725], 5.00th=[ 807], 10.00th=[ 840], 20.00th=[ 930], 00:11:50.645 | 30.00th=[ 971], 40.00th=[ 1004], 50.00th=[ 1020], 60.00th=[ 1037], 00:11:50.645 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1139], 00:11:50.645 | 99.00th=[ 1221], 99.50th=[ 1254], 99.90th=[ 1598], 99.95th=[ 1598], 00:11:50.645 | 99.99th=[ 1598] 00:11:50.646 write: IOPS=769, BW=3077KiB/s (3151kB/s)(3080KiB/1001msec); 0 zone resets 00:11:50.646 slat (nsec): min=10394, max=61145, avg=30995.41, stdev=9992.26 00:11:50.646 clat (usec): min=172, max=1010, avg=568.65, stdev=123.24 00:11:50.646 lat (usec): min=206, max=1045, avg=599.64, stdev=126.92 00:11:50.646 clat percentiles (usec): 00:11:50.646 | 1.00th=[ 253], 5.00th=[ 355], 10.00th=[ 400], 20.00th=[ 469], 00:11:50.646 | 30.00th=[ 515], 40.00th=[ 545], 50.00th=[ 578], 60.00th=[ 603], 00:11:50.646 | 70.00th=[ 635], 80.00th=[ 668], 90.00th=[ 717], 95.00th=[ 758], 00:11:50.646 | 99.00th=[ 832], 99.50th=[ 865], 99.90th=[ 1012], 99.95th=[ 1012], 00:11:50.646 | 99.99th=[ 1012] 00:11:50.646 bw ( KiB/s): min= 4096, max= 4096, per=29.61%, avg=4096.00, stdev= 0.00, samples=1 00:11:50.646 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:50.646 lat (usec) : 250=0.47%, 500=16.07%, 750=40.95%, 1000=17.63% 00:11:50.646 lat (msec) : 2=24.88% 00:11:50.646 cpu : usr=2.20%, sys=3.50%, ctx=1283, majf=0, minf=1 00:11:50.646 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.646 issued rwts: total=512,770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.646 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.646 00:11:50.646 Run status group 0 (all jobs): 00:11:50.646 READ: 
bw=9503KiB/s (9731kB/s), 800KiB/s-4092KiB/s (819kB/s-4190kB/s), io=9788KiB (10.0MB), run=1001-1030msec 00:11:50.646 WRITE: bw=13.5MiB/s (14.2MB/s), 1988KiB/s-5019KiB/s (2036kB/s-5139kB/s), io=13.9MiB (14.6MB), run=1001-1030msec 00:11:50.646 00:11:50.646 Disk stats (read/write): 00:11:50.646 nvme0n1: ios=564/987, merge=0/0, ticks=1166/391, in_queue=1557, util=96.49% 00:11:50.646 nvme0n2: ios=917/1024, merge=0/0, ticks=990/277, in_queue=1267, util=96.93% 00:11:50.646 nvme0n3: ios=229/512, merge=0/0, ticks=1486/219, in_queue=1705, util=96.83% 00:11:50.646 nvme0n4: ios=564/512, merge=0/0, ticks=1285/279, in_queue=1564, util=96.47% 00:11:50.646 12:42:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:50.646 [global] 00:11:50.646 thread=1 00:11:50.646 invalidate=1 00:11:50.646 rw=randwrite 00:11:50.646 time_based=1 00:11:50.646 runtime=1 00:11:50.646 ioengine=libaio 00:11:50.646 direct=1 00:11:50.646 bs=4096 00:11:50.646 iodepth=1 00:11:50.646 norandommap=0 00:11:50.646 numjobs=1 00:11:50.646 00:11:50.646 verify_dump=1 00:11:50.646 verify_backlog=512 00:11:50.646 verify_state_save=0 00:11:50.646 do_verify=1 00:11:50.646 verify=crc32c-intel 00:11:50.646 [job0] 00:11:50.646 filename=/dev/nvme0n1 00:11:50.646 [job1] 00:11:50.646 filename=/dev/nvme0n2 00:11:50.646 [job2] 00:11:50.646 filename=/dev/nvme0n3 00:11:50.646 [job3] 00:11:50.646 filename=/dev/nvme0n4 00:11:50.646 Could not set queue depth (nvme0n1) 00:11:50.646 Could not set queue depth (nvme0n2) 00:11:50.646 Could not set queue depth (nvme0n3) 00:11:50.646 Could not set queue depth (nvme0n4) 00:11:50.907 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.907 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.907 job2: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.907 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.907 fio-3.35 00:11:50.907 Starting 4 threads 00:11:52.297 00:11:52.297 job0: (groupid=0, jobs=1): err= 0: pid=3248045: Thu Nov 28 12:42:22 2024 00:11:52.297 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:52.297 slat (nsec): min=6355, max=44997, avg=25777.58, stdev=4727.91 00:11:52.297 clat (usec): min=499, max=1270, avg=971.81, stdev=111.78 00:11:52.297 lat (usec): min=506, max=1296, avg=997.59, stdev=113.84 00:11:52.297 clat percentiles (usec): 00:11:52.297 | 1.00th=[ 562], 5.00th=[ 742], 10.00th=[ 824], 20.00th=[ 906], 00:11:52.297 | 30.00th=[ 947], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1012], 00:11:52.297 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1106], 00:11:52.297 | 99.00th=[ 1156], 99.50th=[ 1205], 99.90th=[ 1270], 99.95th=[ 1270], 00:11:52.297 | 99.99th=[ 1270] 00:11:52.297 write: IOPS=758, BW=3033KiB/s (3106kB/s)(3036KiB/1001msec); 0 zone resets 00:11:52.297 slat (nsec): min=8752, max=52674, avg=28757.07, stdev=9478.50 00:11:52.297 clat (usec): min=282, max=943, avg=603.52, stdev=109.94 00:11:52.297 lat (usec): min=292, max=975, avg=632.28, stdev=114.64 00:11:52.297 clat percentiles (usec): 00:11:52.297 | 1.00th=[ 334], 5.00th=[ 396], 10.00th=[ 449], 20.00th=[ 515], 00:11:52.297 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 635], 00:11:52.297 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 766], 00:11:52.297 | 99.00th=[ 832], 99.50th=[ 857], 99.90th=[ 947], 99.95th=[ 947], 00:11:52.297 | 99.99th=[ 947] 00:11:52.297 bw ( KiB/s): min= 4096, max= 4096, per=34.81%, avg=4096.00, stdev= 0.00, samples=1 00:11:52.297 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:52.297 lat (usec) : 500=11.09%, 750=45.87%, 1000=24.23% 00:11:52.297 lat (msec) : 2=18.80% 00:11:52.297 cpu : 
usr=2.40%, sys=5.00%, ctx=1272, majf=0, minf=1 00:11:52.297 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:52.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.297 issued rwts: total=512,759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:52.297 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:52.297 job1: (groupid=0, jobs=1): err= 0: pid=3248046: Thu Nov 28 12:42:22 2024 00:11:52.297 read: IOPS=173, BW=694KiB/s (711kB/s)(712KiB/1026msec) 00:11:52.297 slat (nsec): min=6362, max=60066, avg=24328.42, stdev=7745.19 00:11:52.297 clat (usec): min=504, max=42059, avg=3892.13, stdev=10587.66 00:11:52.297 lat (usec): min=531, max=42089, avg=3916.46, stdev=10588.64 00:11:52.297 clat percentiles (usec): 00:11:52.297 | 1.00th=[ 523], 5.00th=[ 652], 10.00th=[ 734], 20.00th=[ 832], 00:11:52.297 | 30.00th=[ 881], 40.00th=[ 914], 50.00th=[ 963], 60.00th=[ 1004], 00:11:52.297 | 70.00th=[ 1029], 80.00th=[ 1074], 90.00th=[ 1139], 95.00th=[41157], 00:11:52.297 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:52.297 | 99.99th=[42206] 00:11:52.297 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:11:52.297 slat (nsec): min=8957, max=51135, avg=30834.58, stdev=8190.81 00:11:52.297 clat (usec): min=265, max=987, avg=599.78, stdev=128.49 00:11:52.297 lat (usec): min=275, max=1022, avg=630.62, stdev=130.99 00:11:52.297 clat percentiles (usec): 00:11:52.297 | 1.00th=[ 306], 5.00th=[ 379], 10.00th=[ 424], 20.00th=[ 494], 00:11:52.297 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 611], 60.00th=[ 635], 00:11:52.297 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 758], 95.00th=[ 807], 00:11:52.297 | 99.00th=[ 881], 99.50th=[ 947], 99.90th=[ 988], 99.95th=[ 988], 00:11:52.297 | 99.99th=[ 988] 00:11:52.297 bw ( KiB/s): min= 4096, max= 4096, per=34.81%, avg=4096.00, stdev= 0.00, samples=1 
00:11:52.297 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:52.297 lat (usec) : 500=15.94%, 750=52.17%, 1000=21.45% 00:11:52.297 lat (msec) : 2=8.55%, 50=1.88% 00:11:52.297 cpu : usr=1.27%, sys=2.73%, ctx=690, majf=0, minf=1 00:11:52.297 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:52.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.297 issued rwts: total=178,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:52.297 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:52.297 job2: (groupid=0, jobs=1): err= 0: pid=3248047: Thu Nov 28 12:42:22 2024 00:11:52.297 read: IOPS=539, BW=2158KiB/s (2210kB/s)(2160KiB/1001msec) 00:11:52.297 slat (nsec): min=5408, max=49693, avg=15420.44, stdev=10007.73 00:11:52.297 clat (usec): min=452, max=1151, avg=823.60, stdev=154.45 00:11:52.297 lat (usec): min=459, max=1177, avg=839.02, stdev=160.84 00:11:52.297 clat percentiles (usec): 00:11:52.297 | 1.00th=[ 482], 5.00th=[ 578], 10.00th=[ 619], 20.00th=[ 685], 00:11:52.298 | 30.00th=[ 725], 40.00th=[ 775], 50.00th=[ 816], 60.00th=[ 865], 00:11:52.298 | 70.00th=[ 914], 80.00th=[ 971], 90.00th=[ 1037], 95.00th=[ 1074], 00:11:52.298 | 99.00th=[ 1123], 99.50th=[ 1139], 99.90th=[ 1156], 99.95th=[ 1156], 00:11:52.298 | 99.99th=[ 1156] 00:11:52.298 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:11:52.298 slat (nsec): min=5957, max=66381, avg=19089.92, stdev=13251.95 00:11:52.298 clat (usec): min=169, max=907, avg=508.39, stdev=145.59 00:11:52.298 lat (usec): min=176, max=940, avg=527.48, stdev=153.46 00:11:52.298 clat percentiles (usec): 00:11:52.298 | 1.00th=[ 217], 5.00th=[ 281], 10.00th=[ 314], 20.00th=[ 383], 00:11:52.298 | 30.00th=[ 416], 40.00th=[ 457], 50.00th=[ 502], 60.00th=[ 553], 00:11:52.298 | 70.00th=[ 586], 80.00th=[ 644], 90.00th=[ 709], 95.00th=[ 750], 
00:11:52.298 | 99.00th=[ 848], 99.50th=[ 889], 99.90th=[ 898], 99.95th=[ 906], 00:11:52.298 | 99.99th=[ 906] 00:11:52.298 bw ( KiB/s): min= 4096, max= 4096, per=34.81%, avg=4096.00, stdev= 0.00, samples=1 00:11:52.298 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:52.298 lat (usec) : 250=1.21%, 500=32.03%, 750=41.11%, 1000=20.01% 00:11:52.298 lat (msec) : 2=5.63% 00:11:52.298 cpu : usr=2.20%, sys=3.40%, ctx=1564, majf=0, minf=1 00:11:52.298 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:52.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.298 issued rwts: total=540,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:52.298 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:52.298 job3: (groupid=0, jobs=1): err= 0: pid=3248048: Thu Nov 28 12:42:22 2024 00:11:52.298 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:52.298 slat (nsec): min=6843, max=62464, avg=26145.86, stdev=4170.66 00:11:52.298 clat (usec): min=654, max=1406, avg=1007.29, stdev=115.41 00:11:52.298 lat (usec): min=662, max=1432, avg=1033.44, stdev=116.55 00:11:52.298 clat percentiles (usec): 00:11:52.298 | 1.00th=[ 725], 5.00th=[ 807], 10.00th=[ 848], 20.00th=[ 914], 00:11:52.298 | 30.00th=[ 955], 40.00th=[ 988], 50.00th=[ 1012], 60.00th=[ 1045], 00:11:52.298 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1188], 00:11:52.298 | 99.00th=[ 1270], 99.50th=[ 1287], 99.90th=[ 1401], 99.95th=[ 1401], 00:11:52.298 | 99.99th=[ 1401] 00:11:52.298 write: IOPS=722, BW=2889KiB/s (2958kB/s)(2892KiB/1001msec); 0 zone resets 00:11:52.298 slat (nsec): min=9839, max=66391, avg=30894.91, stdev=8535.21 00:11:52.298 clat (usec): min=228, max=1090, avg=606.97, stdev=136.54 00:11:52.298 lat (usec): min=241, max=1124, avg=637.86, stdev=139.22 00:11:52.298 clat percentiles (usec): 00:11:52.298 | 1.00th=[ 322], 
5.00th=[ 396], 10.00th=[ 445], 20.00th=[ 486], 00:11:52.298 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 635], 00:11:52.298 | 70.00th=[ 668], 80.00th=[ 717], 90.00th=[ 791], 95.00th=[ 840], 00:11:52.298 | 99.00th=[ 963], 99.50th=[ 979], 99.90th=[ 1090], 99.95th=[ 1090], 00:11:52.298 | 99.99th=[ 1090] 00:11:52.298 bw ( KiB/s): min= 4096, max= 4096, per=34.81%, avg=4096.00, stdev= 0.00, samples=1 00:11:52.298 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:52.298 lat (usec) : 250=0.08%, 500=14.25%, 750=35.79%, 1000=26.40% 00:11:52.298 lat (msec) : 2=23.48% 00:11:52.298 cpu : usr=1.80%, sys=3.80%, ctx=1237, majf=0, minf=1 00:11:52.298 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:52.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.298 issued rwts: total=512,723,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:52.298 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:52.298 00:11:52.298 Run status group 0 (all jobs): 00:11:52.298 READ: bw=6791KiB/s (6954kB/s), 694KiB/s-2158KiB/s (711kB/s-2210kB/s), io=6968KiB (7135kB), run=1001-1026msec 00:11:52.298 WRITE: bw=11.5MiB/s (12.0MB/s), 1996KiB/s-4092KiB/s (2044kB/s-4190kB/s), io=11.8MiB (12.4MB), run=1001-1026msec 00:11:52.298 00:11:52.298 Disk stats (read/write): 00:11:52.298 nvme0n1: ios=555/512, merge=0/0, ticks=526/241, in_queue=767, util=87.78% 00:11:52.298 nvme0n2: ios=123/512, merge=0/0, ticks=613/255, in_queue=868, util=90.21% 00:11:52.298 nvme0n3: ios=512/688, merge=0/0, ticks=409/292, in_queue=701, util=88.41% 00:11:52.298 nvme0n4: ios=504/512, merge=0/0, ticks=1392/304, in_queue=1696, util=96.80% 00:11:52.298 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:52.298 [global] 
00:11:52.298 thread=1 00:11:52.298 invalidate=1 00:11:52.298 rw=write 00:11:52.298 time_based=1 00:11:52.298 runtime=1 00:11:52.298 ioengine=libaio 00:11:52.298 direct=1 00:11:52.298 bs=4096 00:11:52.298 iodepth=128 00:11:52.298 norandommap=0 00:11:52.298 numjobs=1 00:11:52.298 00:11:52.298 verify_dump=1 00:11:52.298 verify_backlog=512 00:11:52.298 verify_state_save=0 00:11:52.298 do_verify=1 00:11:52.298 verify=crc32c-intel 00:11:52.298 [job0] 00:11:52.298 filename=/dev/nvme0n1 00:11:52.298 [job1] 00:11:52.298 filename=/dev/nvme0n2 00:11:52.298 [job2] 00:11:52.298 filename=/dev/nvme0n3 00:11:52.298 [job3] 00:11:52.298 filename=/dev/nvme0n4 00:11:52.298 Could not set queue depth (nvme0n1) 00:11:52.298 Could not set queue depth (nvme0n2) 00:11:52.298 Could not set queue depth (nvme0n3) 00:11:52.298 Could not set queue depth (nvme0n4) 00:11:52.559 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:52.559 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:52.559 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:52.559 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:52.559 fio-3.35 00:11:52.559 Starting 4 threads 00:11:53.945 00:11:53.945 job0: (groupid=0, jobs=1): err= 0: pid=3248568: Thu Nov 28 12:42:23 2024 00:11:53.945 read: IOPS=8525, BW=33.3MiB/s (34.9MB/s)(33.4MiB/1003msec) 00:11:53.945 slat (nsec): min=904, max=10250k, avg=54167.22, stdev=408456.56 00:11:53.945 clat (usec): min=1958, max=26021, avg=7315.20, stdev=2506.45 00:11:53.945 lat (usec): min=1960, max=26048, avg=7369.37, stdev=2530.11 00:11:53.945 clat percentiles (usec): 00:11:53.945 | 1.00th=[ 3195], 5.00th=[ 4424], 10.00th=[ 5145], 20.00th=[ 5604], 00:11:53.945 | 30.00th=[ 5932], 40.00th=[ 6128], 50.00th=[ 6587], 60.00th=[ 7177], 00:11:53.945 | 70.00th=[ 
7701], 80.00th=[ 9110], 90.00th=[10552], 95.00th=[11338], 00:11:53.945 | 99.00th=[15795], 99.50th=[16581], 99.90th=[20579], 99.95th=[20579], 00:11:53.945 | 99.99th=[26084] 00:11:53.945 write: IOPS=8677, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1003msec); 0 zone resets 00:11:53.945 slat (nsec): min=1600, max=7585.9k, avg=54324.61, stdev=329300.13 00:11:53.945 clat (usec): min=683, max=38392, avg=7415.45, stdev=5370.83 00:11:53.945 lat (usec): min=823, max=38405, avg=7469.77, stdev=5403.30 00:11:53.945 clat percentiles (usec): 00:11:53.945 | 1.00th=[ 1958], 5.00th=[ 3130], 10.00th=[ 3589], 20.00th=[ 4621], 00:11:53.945 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5866], 60.00th=[ 6063], 00:11:53.945 | 70.00th=[ 6652], 80.00th=[ 7308], 90.00th=[12911], 95.00th=[22152], 00:11:53.945 | 99.00th=[27657], 99.50th=[31327], 99.90th=[35914], 99.95th=[38536], 00:11:53.945 | 99.99th=[38536] 00:11:53.945 bw ( KiB/s): min=28672, max=40960, per=38.91%, avg=34816.00, stdev=8688.93, samples=2 00:11:53.945 iops : min= 7168, max=10240, avg=8704.00, stdev=2172.23, samples=2 00:11:53.945 lat (usec) : 750=0.01%, 1000=0.13% 00:11:53.945 lat (msec) : 2=0.45%, 4=7.45%, 10=78.44%, 20=10.00%, 50=3.53% 00:11:53.945 cpu : usr=4.79%, sys=8.38%, ctx=762, majf=0, minf=1 00:11:53.945 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:53.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:53.945 issued rwts: total=8551,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.945 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:53.945 job1: (groupid=0, jobs=1): err= 0: pid=3248569: Thu Nov 28 12:42:23 2024 00:11:53.945 read: IOPS=4839, BW=18.9MiB/s (19.8MB/s)(19.8MiB/1046msec) 00:11:53.945 slat (nsec): min=978, max=22069k, avg=109395.43, stdev=871577.34 00:11:53.945 clat (usec): min=3117, max=56679, avg=15561.05, stdev=10855.06 00:11:53.945 lat (usec): 
min=3124, max=59388, avg=15670.45, stdev=10929.81 00:11:53.945 clat percentiles (usec): 00:11:53.945 | 1.00th=[ 4621], 5.00th=[ 6980], 10.00th=[ 7373], 20.00th=[ 7767], 00:11:53.945 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[10814], 60.00th=[12911], 00:11:53.945 | 70.00th=[15008], 80.00th=[22938], 90.00th=[32113], 95.00th=[39584], 00:11:53.945 | 99.00th=[52167], 99.50th=[52691], 99.90th=[56886], 99.95th=[56886], 00:11:53.945 | 99.99th=[56886] 00:11:53.945 write: IOPS=4894, BW=19.1MiB/s (20.0MB/s)(20.0MiB/1046msec); 0 zone resets 00:11:53.945 slat (nsec): min=1645, max=13183k, avg=76659.37, stdev=586792.75 00:11:53.945 clat (usec): min=1265, max=35648, avg=10484.65, stdev=5076.24 00:11:53.945 lat (usec): min=1276, max=35657, avg=10561.31, stdev=5134.48 00:11:53.945 clat percentiles (usec): 00:11:53.945 | 1.00th=[ 3621], 5.00th=[ 4047], 10.00th=[ 5604], 20.00th=[ 7046], 00:11:53.945 | 30.00th=[ 7439], 40.00th=[ 8291], 50.00th=[ 8979], 60.00th=[10421], 00:11:53.945 | 70.00th=[11338], 80.00th=[13042], 90.00th=[16909], 95.00th=[20579], 00:11:53.945 | 99.00th=[26870], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:11:53.945 | 99.99th=[35390] 00:11:53.945 bw ( KiB/s): min=16392, max=24568, per=22.89%, avg=20480.00, stdev=5781.31, samples=2 00:11:53.945 iops : min= 4098, max= 6142, avg=5120.00, stdev=1445.33, samples=2 00:11:53.945 lat (msec) : 2=0.09%, 4=2.47%, 10=47.40%, 20=33.90%, 50=15.31% 00:11:53.945 lat (msec) : 100=0.82% 00:11:53.945 cpu : usr=4.59%, sys=5.17%, ctx=281, majf=0, minf=1 00:11:53.945 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:53.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:53.945 issued rwts: total=5062,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.945 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:53.945 job2: (groupid=0, jobs=1): err= 0: pid=3248571: Thu Nov 28 
12:42:23 2024 00:11:53.945 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:11:53.945 slat (nsec): min=937, max=14819k, avg=116544.14, stdev=903001.46 00:11:53.945 clat (usec): min=2555, max=49727, avg=15255.37, stdev=9099.27 00:11:53.945 lat (usec): min=2559, max=49754, avg=15371.91, stdev=9189.06 00:11:53.945 clat percentiles (usec): 00:11:53.945 | 1.00th=[ 4752], 5.00th=[ 6194], 10.00th=[ 7439], 20.00th=[ 8455], 00:11:53.945 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[11469], 60.00th=[14222], 00:11:53.945 | 70.00th=[16712], 80.00th=[22414], 90.00th=[30278], 95.00th=[34341], 00:11:53.945 | 99.00th=[39584], 99.50th=[49021], 99.90th=[49546], 99.95th=[49546], 00:11:53.945 | 99.99th=[49546] 00:11:53.945 write: IOPS=4646, BW=18.1MiB/s (19.0MB/s)(18.3MiB/1007msec); 0 zone resets 00:11:53.945 slat (nsec): min=1574, max=12862k, avg=90632.46, stdev=682857.01 00:11:53.945 clat (usec): min=1207, max=50300, avg=12269.77, stdev=7550.47 00:11:53.945 lat (usec): min=1220, max=50323, avg=12360.40, stdev=7618.67 00:11:53.945 clat percentiles (usec): 00:11:53.945 | 1.00th=[ 2212], 5.00th=[ 4817], 10.00th=[ 5669], 20.00th=[ 7439], 00:11:53.945 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[10028], 00:11:53.945 | 70.00th=[12911], 80.00th=[18744], 90.00th=[22938], 95.00th=[28443], 00:11:53.945 | 99.00th=[37487], 99.50th=[42206], 99.90th=[42206], 99.95th=[45351], 00:11:53.945 | 99.99th=[50070] 00:11:53.945 bw ( KiB/s): min=12288, max=24576, per=20.60%, avg=18432.00, stdev=8688.93, samples=2 00:11:53.945 iops : min= 3072, max= 6144, avg=4608.00, stdev=2172.23, samples=2 00:11:53.945 lat (msec) : 2=0.44%, 4=1.10%, 10=49.08%, 20=29.49%, 50=19.88% 00:11:53.945 lat (msec) : 100=0.01% 00:11:53.945 cpu : usr=3.78%, sys=4.47%, ctx=377, majf=0, minf=2 00:11:53.945 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:53.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.945 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:53.945 issued rwts: total=4608,4679,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.945 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:53.945 job3: (groupid=0, jobs=1): err= 0: pid=3248572: Thu Nov 28 12:42:23 2024 00:11:53.946 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:11:53.946 slat (nsec): min=988, max=11556k, avg=96881.44, stdev=681481.28 00:11:53.946 clat (usec): min=1977, max=48205, avg=12235.85, stdev=5873.89 00:11:53.946 lat (usec): min=2018, max=48214, avg=12332.74, stdev=5937.85 00:11:53.946 clat percentiles (usec): 00:11:53.946 | 1.00th=[ 3818], 5.00th=[ 6587], 10.00th=[ 7177], 20.00th=[ 8717], 00:11:53.946 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[10945], 60.00th=[12125], 00:11:53.946 | 70.00th=[12911], 80.00th=[14222], 90.00th=[16581], 95.00th=[20579], 00:11:53.946 | 99.00th=[41681], 99.50th=[44303], 99.90th=[47973], 99.95th=[47973], 00:11:53.946 | 99.99th=[47973] 00:11:53.946 write: IOPS=4845, BW=18.9MiB/s (19.8MB/s)(19.1MiB/1010msec); 0 zone resets 00:11:53.946 slat (nsec): min=1696, max=13377k, avg=102395.50, stdev=567933.92 00:11:53.946 clat (usec): min=385, max=48171, avg=14613.12, stdev=9843.06 00:11:53.946 lat (usec): min=545, max=48174, avg=14715.51, stdev=9899.02 00:11:53.946 clat percentiles (usec): 00:11:53.946 | 1.00th=[ 1516], 5.00th=[ 4228], 10.00th=[ 6063], 20.00th=[ 7308], 00:11:53.946 | 30.00th=[ 8225], 40.00th=[10028], 50.00th=[11338], 60.00th=[13173], 00:11:53.946 | 70.00th=[16057], 80.00th=[20317], 90.00th=[32113], 95.00th=[36963], 00:11:53.946 | 99.00th=[41681], 99.50th=[42730], 99.90th=[43779], 99.95th=[47973], 00:11:53.946 | 99.99th=[47973] 00:11:53.946 bw ( KiB/s): min=17648, max=20480, per=21.31%, avg=19064.00, stdev=2002.53, samples=2 00:11:53.946 iops : min= 4412, max= 5120, avg=4766.00, stdev=500.63, samples=2 00:11:53.946 lat (usec) : 500=0.01%, 750=0.08%, 1000=0.01% 00:11:53.946 lat (msec) : 2=0.98%, 4=2.23%, 10=33.22%, 20=50.00%, 
50=13.46% 00:11:53.946 cpu : usr=4.06%, sys=4.86%, ctx=434, majf=0, minf=1 00:11:53.946 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:53.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:53.946 issued rwts: total=4608,4894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.946 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:53.946 00:11:53.946 Run status group 0 (all jobs): 00:11:53.946 READ: bw=85.3MiB/s (89.4MB/s), 17.8MiB/s-33.3MiB/s (18.7MB/s-34.9MB/s), io=89.2MiB (93.5MB), run=1003-1046msec 00:11:53.946 WRITE: bw=87.4MiB/s (91.6MB/s), 18.1MiB/s-33.9MiB/s (19.0MB/s-35.5MB/s), io=91.4MiB (95.8MB), run=1003-1046msec 00:11:53.946 00:11:53.946 Disk stats (read/write): 00:11:53.946 nvme0n1: ios=6815/7168, merge=0/0, ticks=46474/49635, in_queue=96109, util=91.58% 00:11:53.946 nvme0n2: ios=3926/4096, merge=0/0, ticks=36647/30865, in_queue=67512, util=96.83% 00:11:53.946 nvme0n3: ios=4143/4258, merge=0/0, ticks=31695/25711, in_queue=57406, util=91.23% 00:11:53.946 nvme0n4: ios=3604/4096, merge=0/0, ticks=37250/53086, in_queue=90336, util=96.57% 00:11:53.946 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:53.946 [global] 00:11:53.946 thread=1 00:11:53.946 invalidate=1 00:11:53.946 rw=randwrite 00:11:53.946 time_based=1 00:11:53.946 runtime=1 00:11:53.946 ioengine=libaio 00:11:53.946 direct=1 00:11:53.946 bs=4096 00:11:53.946 iodepth=128 00:11:53.946 norandommap=0 00:11:53.946 numjobs=1 00:11:53.946 00:11:53.946 verify_dump=1 00:11:53.946 verify_backlog=512 00:11:53.946 verify_state_save=0 00:11:53.946 do_verify=1 00:11:53.946 verify=crc32c-intel 00:11:53.946 [job0] 00:11:53.946 filename=/dev/nvme0n1 00:11:53.946 [job1] 00:11:53.946 filename=/dev/nvme0n2 00:11:53.946 
[job2] 00:11:53.946 filename=/dev/nvme0n3 00:11:53.946 [job3] 00:11:53.946 filename=/dev/nvme0n4 00:11:53.946 Could not set queue depth (nvme0n1) 00:11:53.946 Could not set queue depth (nvme0n2) 00:11:53.946 Could not set queue depth (nvme0n3) 00:11:53.946 Could not set queue depth (nvme0n4) 00:11:54.515 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:54.515 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:54.515 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:54.515 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:54.515 fio-3.35 00:11:54.515 Starting 4 threads 00:11:55.478 00:11:55.478 job0: (groupid=0, jobs=1): err= 0: pid=3249094: Thu Nov 28 12:42:25 2024 00:11:55.478 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:11:55.478 slat (nsec): min=901, max=14326k, avg=97633.28, stdev=632886.70 00:11:55.478 clat (usec): min=1722, max=70863, avg=12121.37, stdev=6800.61 00:11:55.478 lat (usec): min=1761, max=70871, avg=12219.00, stdev=6871.39 00:11:55.478 clat percentiles (usec): 00:11:55.478 | 1.00th=[ 4621], 5.00th=[ 7046], 10.00th=[ 7701], 20.00th=[ 8586], 00:11:55.478 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[10945], 60.00th=[11469], 00:11:55.478 | 70.00th=[13042], 80.00th=[14615], 90.00th=[16450], 95.00th=[20579], 00:11:55.478 | 99.00th=[51643], 99.50th=[63177], 99.90th=[68682], 99.95th=[70779], 00:11:55.478 | 99.99th=[70779] 00:11:55.478 write: IOPS=5284, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1005msec); 0 zone resets 00:11:55.478 slat (nsec): min=1505, max=13540k, avg=88259.27, stdev=497186.61 00:11:55.478 clat (usec): min=2125, max=70858, avg=12242.20, stdev=10588.58 00:11:55.478 lat (usec): min=2135, max=70871, avg=12330.46, stdev=10656.87 00:11:55.478 clat percentiles (usec): 
00:11:55.478 | 1.00th=[ 4113], 5.00th=[ 5669], 10.00th=[ 6521], 20.00th=[ 7439], 00:11:55.478 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 9372], 60.00th=[10028], 00:11:55.478 | 70.00th=[11207], 80.00th=[14484], 90.00th=[18482], 95.00th=[22414], 00:11:55.478 | 99.00th=[66323], 99.50th=[67634], 99.90th=[69731], 99.95th=[69731], 00:11:55.478 | 99.99th=[70779] 00:11:55.478 bw ( KiB/s): min=17040, max=24576, per=22.82%, avg=20808.00, stdev=5328.76, samples=2 00:11:55.478 iops : min= 4260, max= 6144, avg=5202.00, stdev=1332.19, samples=2 00:11:55.478 lat (msec) : 2=0.06%, 4=0.45%, 10=50.08%, 20=43.02%, 50=4.33% 00:11:55.478 lat (msec) : 100=2.06% 00:11:55.478 cpu : usr=3.78%, sys=4.68%, ctx=454, majf=0, minf=1 00:11:55.478 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:55.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:55.478 issued rwts: total=5120,5311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:55.478 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:55.478 job1: (groupid=0, jobs=1): err= 0: pid=3249095: Thu Nov 28 12:42:25 2024 00:11:55.478 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:11:55.478 slat (nsec): min=898, max=17537k, avg=114864.24, stdev=828608.38 00:11:55.478 clat (usec): min=4531, max=75119, avg=15586.14, stdev=12484.72 00:11:55.478 lat (usec): min=4550, max=85565, avg=15701.00, stdev=12589.74 00:11:55.478 clat percentiles (usec): 00:11:55.478 | 1.00th=[ 5014], 5.00th=[ 6390], 10.00th=[ 6980], 20.00th=[ 7504], 00:11:55.478 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[10290], 60.00th=[12387], 00:11:55.478 | 70.00th=[15270], 80.00th=[20841], 90.00th=[34866], 95.00th=[46924], 00:11:55.478 | 99.00th=[58983], 99.50th=[63701], 99.90th=[73925], 99.95th=[74974], 00:11:55.478 | 99.99th=[74974] 00:11:55.478 write: IOPS=4744, BW=18.5MiB/s (19.4MB/s)(18.7MiB/1007msec); 0 zone resets 
00:11:55.478 slat (nsec): min=1585, max=11737k, avg=94040.86, stdev=641946.76 00:11:55.478 clat (usec): min=1933, max=92534, avg=11654.04, stdev=11095.28 00:11:55.478 lat (usec): min=4117, max=95640, avg=11748.08, stdev=11177.43 00:11:55.478 clat percentiles (usec): 00:11:55.478 | 1.00th=[ 4686], 5.00th=[ 5997], 10.00th=[ 6718], 20.00th=[ 7111], 00:11:55.478 | 30.00th=[ 7308], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 8979], 00:11:55.478 | 70.00th=[10814], 80.00th=[12518], 90.00th=[18744], 95.00th=[22676], 00:11:55.478 | 99.00th=[79168], 99.50th=[85459], 99.90th=[89654], 99.95th=[92799], 00:11:55.478 | 99.99th=[92799] 00:11:55.478 bw ( KiB/s): min=12624, max=24625, per=20.43%, avg=18624.50, stdev=8485.99, samples=2 00:11:55.478 iops : min= 3156, max= 6156, avg=4656.00, stdev=2121.32, samples=2 00:11:55.478 lat (msec) : 2=0.01%, 4=0.01%, 10=58.91%, 20=25.83%, 50=12.21% 00:11:55.478 lat (msec) : 100=3.04% 00:11:55.478 cpu : usr=3.08%, sys=4.97%, ctx=410, majf=0, minf=1 00:11:55.478 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:55.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:55.479 issued rwts: total=4608,4778,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:55.479 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:55.479 job2: (groupid=0, jobs=1): err= 0: pid=3249096: Thu Nov 28 12:42:25 2024 00:11:55.479 read: IOPS=6227, BW=24.3MiB/s (25.5MB/s)(24.5MiB/1007msec) 00:11:55.479 slat (nsec): min=955, max=9069.4k, avg=77986.75, stdev=558708.42 00:11:55.479 clat (usec): min=3169, max=25217, avg=10031.55, stdev=3093.35 00:11:55.479 lat (usec): min=3434, max=25220, avg=10109.54, stdev=3120.47 00:11:55.479 clat percentiles (usec): 00:11:55.479 | 1.00th=[ 4113], 5.00th=[ 6259], 10.00th=[ 7111], 20.00th=[ 7898], 00:11:55.479 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:11:55.479 | 
70.00th=[10552], 80.00th=[11994], 90.00th=[13829], 95.00th=[16450], 00:11:55.479 | 99.00th=[21365], 99.50th=[23200], 99.90th=[24511], 99.95th=[25297], 00:11:55.479 | 99.99th=[25297] 00:11:55.479 write: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec); 0 zone resets 00:11:55.479 slat (nsec): min=1595, max=7328.1k, avg=70955.12, stdev=449904.44 00:11:55.479 clat (usec): min=1088, max=57991, avg=9737.31, stdev=6975.48 00:11:55.479 lat (usec): min=1469, max=57995, avg=9808.26, stdev=7019.45 00:11:55.479 clat percentiles (usec): 00:11:55.479 | 1.00th=[ 2638], 5.00th=[ 4228], 10.00th=[ 4686], 20.00th=[ 5604], 00:11:55.479 | 30.00th=[ 6587], 40.00th=[ 7898], 50.00th=[ 8455], 60.00th=[ 8848], 00:11:55.479 | 70.00th=[ 9241], 80.00th=[10683], 90.00th=[15926], 95.00th=[22414], 00:11:55.479 | 99.00th=[45876], 99.50th=[52167], 99.90th=[57934], 99.95th=[57934], 00:11:55.479 | 99.99th=[57934] 00:11:55.479 bw ( KiB/s): min=24544, max=28753, per=29.23%, avg=26648.50, stdev=2976.21, samples=2 00:11:55.479 iops : min= 6136, max= 7188, avg=6662.00, stdev=743.88, samples=2 00:11:55.479 lat (msec) : 2=0.24%, 4=1.60%, 10=67.74%, 20=25.62%, 50=4.49% 00:11:55.479 lat (msec) : 100=0.30% 00:11:55.479 cpu : usr=3.78%, sys=7.85%, ctx=467, majf=0, minf=2 00:11:55.479 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:55.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:55.479 issued rwts: total=6271,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:55.479 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:55.479 job3: (groupid=0, jobs=1): err= 0: pid=3249097: Thu Nov 28 12:42:25 2024 00:11:55.479 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:11:55.479 slat (nsec): min=952, max=9978.7k, avg=84155.95, stdev=468630.18 00:11:55.479 clat (usec): min=4534, max=25310, avg=10704.80, stdev=3658.53 00:11:55.479 lat (usec): 
min=4553, max=28579, avg=10788.96, stdev=3684.66 00:11:55.479 clat percentiles (usec): 00:11:55.479 | 1.00th=[ 5407], 5.00th=[ 6783], 10.00th=[ 7308], 20.00th=[ 7701], 00:11:55.479 | 30.00th=[ 8291], 40.00th=[ 8848], 50.00th=[ 9765], 60.00th=[10683], 00:11:55.479 | 70.00th=[11731], 80.00th=[13435], 90.00th=[15401], 95.00th=[17433], 00:11:55.479 | 99.00th=[22676], 99.50th=[25035], 99.90th=[25035], 99.95th=[25297], 00:11:55.479 | 99.99th=[25297] 00:11:55.479 write: IOPS=6188, BW=24.2MiB/s (25.3MB/s)(24.2MiB/1003msec); 0 zone resets 00:11:55.479 slat (nsec): min=1589, max=6662.8k, avg=73562.24, stdev=398234.24 00:11:55.479 clat (usec): min=522, max=25160, avg=9778.75, stdev=3325.82 00:11:55.479 lat (usec): min=3926, max=25168, avg=9852.31, stdev=3346.35 00:11:55.479 clat percentiles (usec): 00:11:55.479 | 1.00th=[ 4424], 5.00th=[ 6063], 10.00th=[ 6718], 20.00th=[ 7111], 00:11:55.479 | 30.00th=[ 7504], 40.00th=[ 7832], 50.00th=[ 8848], 60.00th=[10028], 00:11:55.479 | 70.00th=[11207], 80.00th=[12518], 90.00th=[14484], 95.00th=[15795], 00:11:55.479 | 99.00th=[19530], 99.50th=[20841], 99.90th=[23200], 99.95th=[23200], 00:11:55.479 | 99.99th=[25035] 00:11:55.479 bw ( KiB/s): min=22040, max=27112, per=26.96%, avg=24576.00, stdev=3586.45, samples=2 00:11:55.479 iops : min= 5510, max= 6778, avg=6144.00, stdev=896.61, samples=2 00:11:55.479 lat (usec) : 750=0.01% 00:11:55.479 lat (msec) : 4=0.05%, 10=56.21%, 20=41.83%, 50=1.89% 00:11:55.479 cpu : usr=3.29%, sys=5.29%, ctx=680, majf=0, minf=1 00:11:55.479 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:55.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:55.479 issued rwts: total=6144,6207,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:55.479 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:55.479 00:11:55.479 Run status group 0 (all jobs): 00:11:55.479 READ: 
bw=85.9MiB/s (90.1MB/s), 17.9MiB/s-24.3MiB/s (18.7MB/s-25.5MB/s), io=86.5MiB (90.7MB), run=1003-1007msec 00:11:55.479 WRITE: bw=89.0MiB/s (93.4MB/s), 18.5MiB/s-25.8MiB/s (19.4MB/s-27.1MB/s), io=89.7MiB (94.0MB), run=1003-1007msec 00:11:55.479 00:11:55.479 Disk stats (read/write): 00:11:55.479 nvme0n1: ios=4658/4967, merge=0/0, ticks=18836/23222, in_queue=42058, util=91.48% 00:11:55.479 nvme0n2: ios=4472/4608, merge=0/0, ticks=24398/18641, in_queue=43039, util=87.77% 00:11:55.479 nvme0n3: ios=5165/5371, merge=0/0, ticks=42860/48323, in_queue=91183, util=92.19% 00:11:55.479 nvme0n4: ios=4629/5028, merge=0/0, ticks=18179/15821, in_queue=34000, util=97.01% 00:11:55.479 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:55.479 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3249431 00:11:55.479 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:55.479 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:55.479 [global] 00:11:55.479 thread=1 00:11:55.479 invalidate=1 00:11:55.479 rw=read 00:11:55.479 time_based=1 00:11:55.479 runtime=10 00:11:55.479 ioengine=libaio 00:11:55.479 direct=1 00:11:55.479 bs=4096 00:11:55.479 iodepth=1 00:11:55.479 norandommap=1 00:11:55.479 numjobs=1 00:11:55.479 00:11:55.479 [job0] 00:11:55.479 filename=/dev/nvme0n1 00:11:55.760 [job1] 00:11:55.760 filename=/dev/nvme0n2 00:11:55.760 [job2] 00:11:55.760 filename=/dev/nvme0n3 00:11:55.760 [job3] 00:11:55.760 filename=/dev/nvme0n4 00:11:55.760 Could not set queue depth (nvme0n1) 00:11:55.760 Could not set queue depth (nvme0n2) 00:11:55.760 Could not set queue depth (nvme0n3) 00:11:55.760 Could not set queue depth (nvme0n4) 00:11:56.027 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:56.027 
job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:56.028 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:56.028 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:56.028 fio-3.35 00:11:56.028 Starting 4 threads 00:11:58.598 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:58.906 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=307200, buflen=4096 00:11:58.906 fio: pid=3249627, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:58.906 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:58.906 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=4526080, buflen=4096 00:11:58.906 fio: pid=3249626, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:58.906 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:58.906 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:59.175 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:59.175 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:59.175 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=2932736, buflen=4096 00:11:59.175 fio: pid=3249623, 
err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:59.437 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=6955008, buflen=4096 00:11:59.437 fio: pid=3249624, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:59.437 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:59.437 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:59.437 00:11:59.437 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3249623: Thu Nov 28 12:42:29 2024 00:11:59.437 read: IOPS=237, BW=947KiB/s (970kB/s)(2864KiB/3023msec) 00:11:59.437 slat (usec): min=6, max=15174, avg=87.92, stdev=952.02 00:11:59.437 clat (usec): min=577, max=42049, avg=4090.42, stdev=10857.01 00:11:59.437 lat (usec): min=585, max=56523, avg=4178.43, stdev=11033.96 00:11:59.437 clat percentiles (usec): 00:11:59.437 | 1.00th=[ 701], 5.00th=[ 840], 10.00th=[ 881], 20.00th=[ 930], 00:11:59.437 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 979], 60.00th=[ 996], 00:11:59.437 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1057], 95.00th=[41681], 00:11:59.437 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:59.437 | 99.99th=[42206] 00:11:59.437 bw ( KiB/s): min= 96, max= 4008, per=24.76%, avg=1126.40, stdev=1697.99, samples=5 00:11:59.437 iops : min= 24, max= 1002, avg=281.60, stdev=424.50, samples=5 00:11:59.437 lat (usec) : 750=1.81%, 1000=65.13% 00:11:59.437 lat (msec) : 2=25.24%, 50=7.67% 00:11:59.437 cpu : usr=0.63%, sys=0.73%, ctx=720, majf=0, minf=1 00:11:59.437 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:59.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:59.437 complete : 
0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:59.437 issued rwts: total=717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:59.437 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:59.437 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3249624: Thu Nov 28 12:42:29 2024 00:11:59.437 read: IOPS=537, BW=2149KiB/s (2200kB/s)(6792KiB/3161msec) 00:11:59.437 slat (usec): min=7, max=28269, avg=84.84, stdev=1044.06 00:11:59.437 clat (usec): min=513, max=41879, avg=1763.85, stdev=5182.24 00:11:59.437 lat (usec): min=539, max=41904, avg=1848.72, stdev=5278.98 00:11:59.437 clat percentiles (usec): 00:11:59.437 | 1.00th=[ 717], 5.00th=[ 857], 10.00th=[ 947], 20.00th=[ 996], 00:11:59.437 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1106], 00:11:59.437 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1221], 00:11:59.437 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:11:59.437 | 99.99th=[41681] 00:11:59.437 bw ( KiB/s): min= 976, max= 3528, per=46.28%, avg=2105.33, stdev=1167.77, samples=6 00:11:59.437 iops : min= 244, max= 882, avg=526.33, stdev=291.94, samples=6 00:11:59.437 lat (usec) : 750=1.24%, 1000=18.89% 00:11:59.437 lat (msec) : 2=77.93%, 4=0.06%, 10=0.06%, 50=1.77% 00:11:59.437 cpu : usr=0.73%, sys=1.49%, ctx=1705, majf=0, minf=2 00:11:59.437 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:59.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:59.437 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:59.437 issued rwts: total=1699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:59.437 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:59.437 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3249626: Thu Nov 28 12:42:29 2024 00:11:59.437 read: IOPS=395, BW=1581KiB/s 
(1619kB/s)(4420KiB/2796msec) 00:11:59.437 slat (usec): min=7, max=15328, avg=52.40, stdev=638.18 00:11:59.437 clat (usec): min=473, max=42047, avg=2447.31, stdev=7575.62 00:11:59.437 lat (usec): min=500, max=42073, avg=2499.75, stdev=7596.49 00:11:59.437 clat percentiles (usec): 00:11:59.437 | 1.00th=[ 627], 5.00th=[ 676], 10.00th=[ 717], 20.00th=[ 775], 00:11:59.437 | 30.00th=[ 816], 40.00th=[ 996], 50.00th=[ 1074], 60.00th=[ 1106], 00:11:59.437 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1270], 00:11:59.437 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:59.437 | 99.99th=[42206] 00:11:59.437 bw ( KiB/s): min= 96, max= 3680, per=28.14%, avg=1280.00, stdev=1658.45, samples=5 00:11:59.437 iops : min= 24, max= 920, avg=320.00, stdev=414.61, samples=5 00:11:59.437 lat (usec) : 500=0.09%, 750=15.10%, 1000=25.41% 00:11:59.437 lat (msec) : 2=55.70%, 50=3.62% 00:11:59.437 cpu : usr=0.39%, sys=1.18%, ctx=1109, majf=0, minf=2 00:11:59.437 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:59.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:59.437 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:59.438 issued rwts: total=1106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:59.438 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:59.438 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3249627: Thu Nov 28 12:42:29 2024 00:11:59.438 read: IOPS=29, BW=115KiB/s (118kB/s)(300KiB/2607msec) 00:11:59.438 slat (nsec): min=7304, max=43898, avg=26507.76, stdev=4218.29 00:11:59.438 clat (usec): min=545, max=41216, avg=34380.89, stdev=14847.51 00:11:59.438 lat (usec): min=588, max=41243, avg=34407.39, stdev=14848.57 00:11:59.438 clat percentiles (usec): 00:11:59.438 | 1.00th=[ 545], 5.00th=[ 611], 10.00th=[ 775], 20.00th=[40633], 00:11:59.438 | 30.00th=[41157], 40.00th=[41157], 
50.00th=[41157], 60.00th=[41157], 00:11:59.438 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:59.438 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:59.438 | 99.99th=[41157] 00:11:59.438 bw ( KiB/s): min= 96, max= 192, per=2.55%, avg=116.80, stdev=42.18, samples=5 00:11:59.438 iops : min= 24, max= 48, avg=29.20, stdev=10.55, samples=5 00:11:59.438 lat (usec) : 750=9.21%, 1000=5.26% 00:11:59.438 lat (msec) : 2=1.32%, 50=82.89% 00:11:59.438 cpu : usr=0.00%, sys=0.12%, ctx=76, majf=0, minf=2 00:11:59.438 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:59.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:59.438 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:59.438 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:59.438 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:59.438 00:11:59.438 Run status group 0 (all jobs): 00:11:59.438 READ: bw=4548KiB/s (4657kB/s), 115KiB/s-2149KiB/s (118kB/s-2200kB/s), io=14.0MiB (14.7MB), run=2607-3161msec 00:11:59.438 00:11:59.438 Disk stats (read/write): 00:11:59.438 nvme0n1: ios=712/0, merge=0/0, ticks=2756/0, in_queue=2756, util=93.32% 00:11:59.438 nvme0n2: ios=1644/0, merge=0/0, ticks=2873/0, in_queue=2873, util=92.66% 00:11:59.438 nvme0n3: ios=907/0, merge=0/0, ticks=2534/0, in_queue=2534, util=95.99% 00:11:59.438 nvme0n4: ios=75/0, merge=0/0, ticks=2580/0, in_queue=2580, util=96.39% 00:11:59.438 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:59.438 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:59.698 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:11:59.698 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:59.989 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:59.989 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:59.989 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:59.990 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:00.251 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:00.251 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3249431 00:12:00.251 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:00.251 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:00.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.251 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:00.251 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:12:00.251 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:00.251 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.251 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.251 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:00.251 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:12:00.251 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:00.251 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:00.251 nvmf hotplug test: fio failed as expected 00:12:00.251 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.512 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:00.512 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:00.512 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:00.512 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:00.512 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:00.512 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:00.512 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:00.512 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:00.512 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:00.512 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:00.512 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- 
# modprobe -v -r nvme-tcp 00:12:00.512 rmmod nvme_tcp 00:12:00.512 rmmod nvme_fabrics 00:12:00.512 rmmod nvme_keyring 00:12:00.512 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3245726 ']' 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3245726 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3245726 ']' 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3245726 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3245726 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3245726' 00:12:00.773 killing process with pid 3245726 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3245726 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3245726 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.773 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.321 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:03.321 00:12:03.321 real 0m29.441s 00:12:03.321 user 2m36.970s 00:12:03.321 sys 0m9.462s 00:12:03.321 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.321 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.321 ************************************ 00:12:03.321 END TEST nvmf_fio_target 00:12:03.321 ************************************ 00:12:03.321 12:42:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:03.321 12:42:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:03.321 12:42:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.321 12:42:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:03.321 ************************************ 00:12:03.321 START TEST nvmf_bdevio 00:12:03.321 ************************************ 00:12:03.321 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:03.321 * Looking for test storage... 00:12:03.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.321 12:42:33 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.321 12:42:33 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:03.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.321 --rc genhtml_branch_coverage=1 00:12:03.321 --rc genhtml_function_coverage=1 00:12:03.321 --rc genhtml_legend=1 00:12:03.321 --rc geninfo_all_blocks=1 00:12:03.321 --rc geninfo_unexecuted_blocks=1 00:12:03.321 00:12:03.321 ' 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:03.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.321 --rc genhtml_branch_coverage=1 00:12:03.321 --rc genhtml_function_coverage=1 00:12:03.321 --rc genhtml_legend=1 00:12:03.321 --rc geninfo_all_blocks=1 00:12:03.321 --rc geninfo_unexecuted_blocks=1 00:12:03.321 00:12:03.321 ' 00:12:03.321 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:03.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.321 --rc genhtml_branch_coverage=1 00:12:03.321 --rc genhtml_function_coverage=1 00:12:03.321 --rc genhtml_legend=1 00:12:03.321 --rc geninfo_all_blocks=1 00:12:03.321 --rc geninfo_unexecuted_blocks=1 00:12:03.321 00:12:03.322 ' 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:03.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.322 --rc genhtml_branch_coverage=1 00:12:03.322 --rc 
genhtml_function_coverage=1 00:12:03.322 --rc genhtml_legend=1 00:12:03.322 --rc geninfo_all_blocks=1 00:12:03.322 --rc geninfo_unexecuted_blocks=1 00:12:03.322 00:12:03.322 ' 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.322 12:42:33 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:03.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:12:03.322 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:11.463 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:11.464 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.464 12:42:40 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:11.464 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:11.464 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:11.464 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- 
# nvmf_tcp_init 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:11.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:11.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:12:11.464 00:12:11.464 --- 10.0.0.2 ping statistics --- 00:12:11.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.464 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:11.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:11.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:12:11.464 00:12:11.464 --- 10.0.0.1 ping statistics --- 00:12:11.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.464 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3254691 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3254691 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
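For reference, the `nvmf_tcp_init` sequence traced above (isolate the target NIC in a network namespace, assign 10.0.0.1/10.0.0.2, open TCP port 4420) reduces to a handful of ip(8)/iptables(8) commands. This is a hedged sketch, not the actual `nvmf/common.sh` implementation; the names `cvl_0_0`, `cvl_0_1`, and `cvl_0_0_ns_spdk` are taken from this run, and the commands are collected into an array so the sequence can be inspected without root privileges.

```shell
# Sketch of the netns-based loopback topology set up by nvmf_tcp_init.
# The target interface moves into its own namespace so target (10.0.0.2)
# and initiator (10.0.0.1) traffic crosses the wire instead of loopback.
ns=cvl_0_0_ns_spdk
target_if=cvl_0_0   # moved into the namespace, addressed 10.0.0.2
init_if=cvl_0_1     # stays in the root namespace, addressed 10.0.0.1

setup_cmds=(
  "ip netns add $ns"
  "ip link set $target_if netns $ns"
  "ip addr add 10.0.0.1/24 dev $init_if"
  "ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if"
  "ip link set $init_if up"
  "ip netns exec $ns ip link set $target_if up"
  "iptables -I INPUT 1 -i $init_if -p tcp --dport 4420 -j ACCEPT"
)
printf '%s\n' "${setup_cmds[@]}"
```

The final iptables rule mirrors the `ipts` wrapper in the trace, which additionally tags the rule with an `SPDK_NVMF` comment so teardown can strip it selectively.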
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3254691 ']' 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.464 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:11.464 [2024-11-28 12:42:40.744504] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:12:11.464 [2024-11-28 12:42:40.744574] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.464 [2024-11-28 12:42:40.889624] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:11.464 [2024-11-28 12:42:40.949493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.464 [2024-11-28 12:42:40.968286] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.464 [2024-11-28 12:42:40.968315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:11.465 [2024-11-28 12:42:40.968323] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.465 [2024-11-28 12:42:40.968330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.465 [2024-11-28 12:42:40.968335] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.465 [2024-11-28 12:42:40.970139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:11.465 [2024-11-28 12:42:40.970342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.465 [2024-11-28 12:42:40.970343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:11.465 [2024-11-28 12:42:40.970190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:11.465 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:11.465 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:12:11.465 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:11.465 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:11.465 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:11.725 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:11.726 [2024-11-28 12:42:41.605898] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:11.726 Malloc0 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:11.726 [2024-11-28 
12:42:41.674818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:11.726 { 00:12:11.726 "params": { 00:12:11.726 "name": "Nvme$subsystem", 00:12:11.726 "trtype": "$TEST_TRANSPORT", 00:12:11.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:11.726 "adrfam": "ipv4", 00:12:11.726 "trsvcid": "$NVMF_PORT", 00:12:11.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:11.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:11.726 "hdgst": ${hdgst:-false}, 00:12:11.726 "ddgst": ${ddgst:-false} 00:12:11.726 }, 00:12:11.726 "method": "bdev_nvme_attach_controller" 00:12:11.726 } 00:12:11.726 EOF 00:12:11.726 )") 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:11.726 12:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:11.726 "params": { 00:12:11.726 "name": "Nvme1", 00:12:11.726 "trtype": "tcp", 00:12:11.726 "traddr": "10.0.0.2", 00:12:11.726 "adrfam": "ipv4", 00:12:11.726 "trsvcid": "4420", 00:12:11.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:11.726 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:11.726 "hdgst": false, 00:12:11.726 "ddgst": false 00:12:11.726 }, 00:12:11.726 "method": "bdev_nvme_attach_controller" 00:12:11.726 }' 00:12:11.726 [2024-11-28 12:42:41.731889] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:12:11.726 [2024-11-28 12:42:41.731959] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255019 ] 00:12:11.986 [2024-11-28 12:42:41.869689] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
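The `gen_nvmf_target_json` heredoc expansion traced above can be sketched as follows. This is a simplified, hedged reconstruction for a single subsystem with the values from this run (`tcp` / `10.0.0.2` / `4420`); the real helper loops over its arguments and merges the fragments with `jq`, which is omitted here.

```shell
# Expand the per-subsystem bdev_nvme_attach_controller config fragment,
# as gen_nvmf_target_json does for subsystem 1 in the trace above.
subsystem=1
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

# hdgst/ddgst default to false via ${var:-false} when unset, matching
# the expanded JSON printed in the log.
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

The bdevio test then consumes this JSON on `/dev/fd/62`, so the attach parameters never touch disk.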
00:12:11.986 [2024-11-28 12:42:41.929792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:11.986 [2024-11-28 12:42:41.961438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.986 [2024-11-28 12:42:41.961665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.986 [2024-11-28 12:42:41.961666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.247 I/O targets: 00:12:12.247 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:12.247 00:12:12.247 00:12:12.247 CUnit - A unit testing framework for C - Version 2.1-3 00:12:12.247 http://cunit.sourceforge.net/ 00:12:12.247 00:12:12.247 00:12:12.247 Suite: bdevio tests on: Nvme1n1 00:12:12.247 Test: blockdev write read block ...passed 00:12:12.247 Test: blockdev write zeroes read block ...passed 00:12:12.247 Test: blockdev write zeroes read no split ...passed 00:12:12.247 Test: blockdev write zeroes read split ...passed 00:12:12.247 Test: blockdev write zeroes read split partial ...passed 00:12:12.247 Test: blockdev reset ...[2024-11-28 12:42:42.287724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:12.247 [2024-11-28 12:42:42.287797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8c10 (9): Bad file descriptor 00:12:12.247 [2024-11-28 12:42:42.305487] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:12:12.247 passed 00:12:12.247 Test: blockdev write read 8 blocks ...passed 00:12:12.247 Test: blockdev write read size > 128k ...passed 00:12:12.247 Test: blockdev write read invalid size ...passed 00:12:12.508 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.508 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.508 Test: blockdev write read max offset ...passed 00:12:12.508 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.508 Test: blockdev writev readv 8 blocks ...passed 00:12:12.508 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.508 Test: blockdev writev readv block ...passed 00:12:12.508 Test: blockdev writev readv size > 128k ...passed 00:12:12.508 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.508 Test: blockdev comparev and writev ...[2024-11-28 12:42:42.573028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.508 [2024-11-28 12:42:42.573088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:12.508 [2024-11-28 12:42:42.573106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.509 [2024-11-28 12:42:42.573116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:12.509 [2024-11-28 12:42:42.573708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.509 [2024-11-28 12:42:42.573720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:12.509 [2024-11-28 12:42:42.573735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.509 [2024-11-28 12:42:42.573743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:12.509 [2024-11-28 12:42:42.574312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.509 [2024-11-28 12:42:42.574323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:12.509 [2024-11-28 12:42:42.574337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.509 [2024-11-28 12:42:42.574346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:12.509 [2024-11-28 12:42:42.574866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.509 [2024-11-28 12:42:42.574878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:12.509 [2024-11-28 12:42:42.574893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.509 [2024-11-28 12:42:42.574902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:12.509 passed 00:12:12.769 Test: blockdev nvme passthru rw ...passed 00:12:12.769 Test: blockdev nvme passthru vendor specific ...[2024-11-28 12:42:42.658972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:12.769 [2024-11-28 12:42:42.658987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:12.769 [2024-11-28 12:42:42.659406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:12.769 [2024-11-28 12:42:42.659418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:12.769 [2024-11-28 12:42:42.659821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:12.769 [2024-11-28 12:42:42.659832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:12.769 [2024-11-28 12:42:42.660230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:12.769 [2024-11-28 12:42:42.660243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:12.770 passed 00:12:12.770 Test: blockdev nvme admin passthru ...passed 00:12:12.770 Test: blockdev copy ...passed 00:12:12.770 00:12:12.770 Run Summary: Type Total Ran Passed Failed Inactive 00:12:12.770 suites 1 1 n/a 0 0 00:12:12.770 tests 23 23 23 0 0 00:12:12.770 asserts 152 152 152 0 n/a 00:12:12.770 00:12:12.770 Elapsed time = 1.214 seconds 00:12:12.770 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:12.770 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.770 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:12.770 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.770 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 
00:12:12.770 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:12.770 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:12.770 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:12.770 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:12.770 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:12.770 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:12.770 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:12.770 rmmod nvme_tcp 00:12:12.770 rmmod nvme_fabrics 00:12:12.770 rmmod nvme_keyring 00:12:13.029 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:13.029 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:13.029 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:12:13.029 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3254691 ']' 00:12:13.029 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3254691 00:12:13.029 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3254691 ']' 00:12:13.029 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3254691 00:12:13.029 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:12:13.029 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:13.029 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3254691 00:12:13.029 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- 
# process_name=reactor_3 00:12:13.029 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:13.029 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3254691' 00:12:13.029 killing process with pid 3254691 00:12:13.029 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3254691 00:12:13.029 12:42:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3254691 00:12:13.029 12:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:13.289 12:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:13.289 12:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:13.289 12:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:13.289 12:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:13.289 12:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:13.289 12:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:13.289 12:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:13.289 12:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:13.289 12:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.289 12:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.289 12:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.201 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush 
cvl_0_1 00:12:15.201 00:12:15.201 real 0m12.262s 00:12:15.201 user 0m12.867s 00:12:15.201 sys 0m6.274s 00:12:15.201 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.201 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:15.201 ************************************ 00:12:15.201 END TEST nvmf_bdevio 00:12:15.201 ************************************ 00:12:15.201 12:42:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:15.201 00:12:15.201 real 5m6.471s 00:12:15.201 user 11m50.486s 00:12:15.201 sys 1m52.505s 00:12:15.201 12:42:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.201 12:42:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:15.201 ************************************ 00:12:15.201 END TEST nvmf_target_core 00:12:15.201 ************************************ 00:12:15.201 12:42:45 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:15.201 12:42:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:15.201 12:42:45 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.201 12:42:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:15.461 ************************************ 00:12:15.461 START TEST nvmf_target_extra 00:12:15.461 ************************************ 00:12:15.461 12:42:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:15.461 * Looking for test storage... 
00:12:15.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:15.461 12:42:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:15.461 12:42:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:12:15.461 12:42:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:15.461 12:42:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:15.461 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.461 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.461 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.461 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.461 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.461 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.461 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.461 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.461 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.461 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:15.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.462 --rc genhtml_branch_coverage=1 00:12:15.462 --rc genhtml_function_coverage=1 00:12:15.462 --rc genhtml_legend=1 00:12:15.462 --rc geninfo_all_blocks=1 00:12:15.462 --rc geninfo_unexecuted_blocks=1 00:12:15.462 00:12:15.462 ' 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:15.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.462 --rc 
genhtml_branch_coverage=1 00:12:15.462 --rc genhtml_function_coverage=1 00:12:15.462 --rc genhtml_legend=1 00:12:15.462 --rc geninfo_all_blocks=1 00:12:15.462 --rc geninfo_unexecuted_blocks=1 00:12:15.462 00:12:15.462 ' 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:15.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.462 --rc genhtml_branch_coverage=1 00:12:15.462 --rc genhtml_function_coverage=1 00:12:15.462 --rc genhtml_legend=1 00:12:15.462 --rc geninfo_all_blocks=1 00:12:15.462 --rc geninfo_unexecuted_blocks=1 00:12:15.462 00:12:15.462 ' 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:15.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.462 --rc genhtml_branch_coverage=1 00:12:15.462 --rc genhtml_function_coverage=1 00:12:15.462 --rc genhtml_legend=1 00:12:15.462 --rc geninfo_all_blocks=1 00:12:15.462 --rc geninfo_unexecuted_blocks=1 00:12:15.462 00:12:15.462 ' 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.462 12:42:45 
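The `cmp_versions` trace above splits each version string on `.`, `-` and `:` and compares the fields numerically. A minimal standalone sketch of that logic (the function name `version_lt` is illustrative; the traced helper is `lt` in scripts/common.sh):

```shell
#!/usr/bin/env bash
# Sketch of the version comparison traced above (assumption: field-wise
# numeric compare after splitting on '.', '-' and ':'; missing fields read as 0).
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # strictly less
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1  # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"
# → lcov 1.15 predates 2
```

This is why the trace takes the `lt 1.15 2` branch and enables the extra lcov branch/function coverage options.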
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.462 12:42:45 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:15.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
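The trace above also records a genuine shell error: `'[' '' -eq 1 ']'` at nvmf/common.sh line 33 fails with `integer expression expected` because the tested variable expands to the empty string. The usual guard substitutes a numeric default before the test (sketch only; `flag` is a made-up variable, not one from this log):

```shell
#!/usr/bin/env bash
# The "[: : integer expression expected" failure above comes from testing
# an unset/empty variable with -eq. Substituting a default avoids it.
flag=""
if [ "${flag:-0}" -eq 1 ]; then
    echo "feature enabled"
else
    echo "feature disabled"
fi
# → feature disabled
```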
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:15.723 ************************************ 00:12:15.723 START TEST nvmf_example 00:12:15.723 ************************************ 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:15.723 * Looking for test storage... 00:12:15.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.723 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.724 
12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:15.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.724 --rc genhtml_branch_coverage=1 00:12:15.724 --rc genhtml_function_coverage=1 00:12:15.724 --rc genhtml_legend=1 00:12:15.724 --rc geninfo_all_blocks=1 00:12:15.724 --rc geninfo_unexecuted_blocks=1 00:12:15.724 00:12:15.724 ' 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:15.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.724 --rc genhtml_branch_coverage=1 00:12:15.724 --rc genhtml_function_coverage=1 00:12:15.724 --rc genhtml_legend=1 00:12:15.724 --rc geninfo_all_blocks=1 00:12:15.724 --rc geninfo_unexecuted_blocks=1 00:12:15.724 00:12:15.724 ' 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:15.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.724 --rc genhtml_branch_coverage=1 00:12:15.724 --rc genhtml_function_coverage=1 00:12:15.724 --rc genhtml_legend=1 00:12:15.724 --rc geninfo_all_blocks=1 00:12:15.724 --rc geninfo_unexecuted_blocks=1 00:12:15.724 00:12:15.724 ' 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:15.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.724 --rc 
genhtml_branch_coverage=1 00:12:15.724 --rc genhtml_function_coverage=1 00:12:15.724 --rc genhtml_legend=1 00:12:15.724 --rc geninfo_all_blocks=1 00:12:15.724 --rc geninfo_unexecuted_blocks=1 00:12:15.724 00:12:15.724 ' 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.724 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.984 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:15.984 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:15.984 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.984 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.984 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.984 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.984 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.984 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.984 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.984 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.984 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.984 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:15.985 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:15.985 12:42:45 
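Each time paths/export.sh is sourced it prepends the same Go/protoc/golangci directories again, which is why the PATH values echoed above keep growing with repeated entries. An order-preserving de-duplication pass would keep them bounded (sketch; `dedupe_path` is a hypothetical helper, not part of SPDK):

```shell
#!/usr/bin/env bash
# Deduplicate a PATH-style string, keeping the first occurrence of each
# directory (bash >= 4 for the associative array).
dedupe_path() {
    local IFS=: dir out=
    declare -A seen
    for dir in $1; do
        [[ -n ${seen[$dir]} ]] && continue
        seen[$dir]=1
        out+=${out:+:}$dir
    done
    printf '%s\n' "$out"
}

dedupe_path "/opt/go/bin:/usr/bin:/opt/go/bin:/bin:/usr/bin"
# → /opt/go/bin:/usr/bin:/bin
```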
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.985 
12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:15.985 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:24.121 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:24.121 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:24.121 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:24.122 12:42:53 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:24.122 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:24.122 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:24.122 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:24.122 12:42:53 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:24.122 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:24.122 
12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:24.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:24.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:12:24.122 00:12:24.122 --- 10.0.0.2 ping statistics --- 00:12:24.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.122 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:24.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:24.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:12:24.122 00:12:24.122 --- 10.0.0.1 ping statistics --- 00:12:24.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.122 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:24.122 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:24.123 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:24.123 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:24.123 12:42:53 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:24.123 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:24.123 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:24.123 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:24.123 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:24.123 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:24.123 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:24.123 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3259627 00:12:24.123 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:24.123 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:24.123 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3259627 00:12:24.123 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3259627 ']' 00:12:24.123 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.123 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:24.123 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:12:24.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.123 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:24.123 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:24.383 
12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:24.383 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:36.615 Initializing NVMe Controllers 00:12:36.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:36.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:36.615 Initialization complete. Launching workers. 00:12:36.615 ======================================================== 00:12:36.615 Latency(us) 00:12:36.615 Device Information : IOPS MiB/s Average min max 00:12:36.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18379.50 71.79 3482.08 626.18 19818.13 00:12:36.615 ======================================================== 00:12:36.615 Total : 18379.50 71.79 3482.08 626.18 19818.13 00:12:36.615 00:12:36.615 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:36.615 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:36.615 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:36.615 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:36.615 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:36.615 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:36.615 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:36.615 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:36.615 rmmod nvme_tcp 00:12:36.615 rmmod nvme_fabrics 00:12:36.615 rmmod nvme_keyring 00:12:36.615 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:36.615 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:12:36.615 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:36.615 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3259627 ']' 00:12:36.615 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3259627 00:12:36.615 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3259627 ']' 00:12:36.615 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3259627 00:12:36.615 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:12:36.615 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:36.615 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3259627 00:12:36.615 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:36.615 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:36.615 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3259627' 00:12:36.615 killing process with pid 3259627 00:12:36.615 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3259627 00:12:36.615 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3259627 00:12:36.615 nvmf threads initialize successfully 00:12:36.615 bdev subsystem init successfully 00:12:36.615 created a nvmf target service 00:12:36.615 create targets's poll groups done 00:12:36.615 all subsystems of target started 00:12:36.615 nvmf target is running 00:12:36.615 all subsystems of target stopped 00:12:36.615 destroy targets's poll groups done 00:12:36.615 destroyed the nvmf target service 00:12:36.615 bdev subsystem 
finish successfully 00:12:36.615 nvmf threads destroy successfully 00:12:36.615 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:36.615 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:36.615 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:36.615 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:12:36.615 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:36.615 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:12:36.615 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:12:36.615 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:36.615 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:36.615 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.615 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.615 12:43:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.187 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:37.187 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:37.187 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:37.187 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:37.187 00:12:37.187 real 0m21.585s 00:12:37.187 user 0m46.680s 00:12:37.187 sys 0m7.062s 00:12:37.187 
12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.187 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:37.187 ************************************ 00:12:37.187 END TEST nvmf_example 00:12:37.187 ************************************ 00:12:37.187 12:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:37.187 12:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:37.187 12:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.187 12:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:37.187 ************************************ 00:12:37.187 START TEST nvmf_filesystem 00:12:37.187 ************************************ 00:12:37.187 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:37.450 * Looking for test storage... 
00:12:37.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:37.450 
12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:37.450 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:37.450 --rc genhtml_branch_coverage=1 00:12:37.450 --rc genhtml_function_coverage=1 00:12:37.450 --rc genhtml_legend=1 00:12:37.450 --rc geninfo_all_blocks=1 00:12:37.450 --rc geninfo_unexecuted_blocks=1 00:12:37.450 00:12:37.450 ' 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:37.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.450 --rc genhtml_branch_coverage=1 00:12:37.450 --rc genhtml_function_coverage=1 00:12:37.450 --rc genhtml_legend=1 00:12:37.450 --rc geninfo_all_blocks=1 00:12:37.450 --rc geninfo_unexecuted_blocks=1 00:12:37.450 00:12:37.450 ' 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:37.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.450 --rc genhtml_branch_coverage=1 00:12:37.450 --rc genhtml_function_coverage=1 00:12:37.450 --rc genhtml_legend=1 00:12:37.450 --rc geninfo_all_blocks=1 00:12:37.450 --rc geninfo_unexecuted_blocks=1 00:12:37.450 00:12:37.450 ' 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:37.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.450 --rc genhtml_branch_coverage=1 00:12:37.450 --rc genhtml_function_coverage=1 00:12:37.450 --rc genhtml_legend=1 00:12:37.450 --rc geninfo_all_blocks=1 00:12:37.450 --rc geninfo_unexecuted_blocks=1 00:12:37.450 00:12:37.450 ' 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:37.450 12:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:37.450 12:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:37.450 12:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:37.450 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 
-- # CONFIG_ARCH=native 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:37.451 
12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:37.451 12:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:37.451 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:37.451 #define SPDK_CONFIG_H 00:12:37.451 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:37.451 #define SPDK_CONFIG_APPS 1 00:12:37.451 #define SPDK_CONFIG_ARCH native 00:12:37.451 #undef SPDK_CONFIG_ASAN 00:12:37.451 #undef SPDK_CONFIG_AVAHI 00:12:37.451 #undef SPDK_CONFIG_CET 00:12:37.451 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:37.451 #define SPDK_CONFIG_COVERAGE 1 00:12:37.451 #define SPDK_CONFIG_CROSS_PREFIX 00:12:37.451 #undef SPDK_CONFIG_CRYPTO 00:12:37.451 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:37.451 #undef SPDK_CONFIG_CUSTOMOCF 00:12:37.451 #undef SPDK_CONFIG_DAOS 00:12:37.451 #define SPDK_CONFIG_DAOS_DIR 00:12:37.451 #define SPDK_CONFIG_DEBUG 1 00:12:37.451 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:37.451 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:12:37.451 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:12:37.451 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:12:37.451 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:37.451 #undef SPDK_CONFIG_DPDK_UADK 00:12:37.451 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:37.451 #define SPDK_CONFIG_EXAMPLES 1 00:12:37.451 #undef SPDK_CONFIG_FC 00:12:37.451 #define SPDK_CONFIG_FC_PATH 00:12:37.451 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:37.451 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:37.451 #define SPDK_CONFIG_FSDEV 1 00:12:37.451 #undef SPDK_CONFIG_FUSE 00:12:37.451 #undef SPDK_CONFIG_FUZZER 00:12:37.451 #define 
SPDK_CONFIG_FUZZER_LIB 00:12:37.451 #undef SPDK_CONFIG_GOLANG 00:12:37.451 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:37.451 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:37.451 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:37.451 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:37.451 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:37.451 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:37.451 #undef SPDK_CONFIG_HAVE_LZ4 00:12:37.451 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:37.451 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:37.451 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:37.451 #define SPDK_CONFIG_IDXD 1 00:12:37.451 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:37.451 #undef SPDK_CONFIG_IPSEC_MB 00:12:37.451 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:37.451 #define SPDK_CONFIG_ISAL 1 00:12:37.451 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:37.451 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:37.451 #define SPDK_CONFIG_LIBDIR 00:12:37.451 #undef SPDK_CONFIG_LTO 00:12:37.451 #define SPDK_CONFIG_MAX_LCORES 128 00:12:37.451 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:37.451 #define SPDK_CONFIG_NVME_CUSE 1 00:12:37.451 #undef SPDK_CONFIG_OCF 00:12:37.451 #define SPDK_CONFIG_OCF_PATH 00:12:37.451 #define SPDK_CONFIG_OPENSSL_PATH 00:12:37.451 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:37.451 #define SPDK_CONFIG_PGO_DIR 00:12:37.451 #undef SPDK_CONFIG_PGO_USE 00:12:37.451 #define SPDK_CONFIG_PREFIX /usr/local 00:12:37.451 #undef SPDK_CONFIG_RAID5F 00:12:37.451 #undef SPDK_CONFIG_RBD 00:12:37.451 #define SPDK_CONFIG_RDMA 1 00:12:37.451 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:37.451 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:37.451 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:37.451 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:37.451 #define SPDK_CONFIG_SHARED 1 00:12:37.451 #undef SPDK_CONFIG_SMA 00:12:37.451 #define SPDK_CONFIG_TESTS 1 00:12:37.451 #undef SPDK_CONFIG_TSAN 00:12:37.451 #define SPDK_CONFIG_UBLK 1 00:12:37.451 #define SPDK_CONFIG_UBSAN 1 00:12:37.451 #undef 
SPDK_CONFIG_UNIT_TESTS 00:12:37.451 #undef SPDK_CONFIG_URING 00:12:37.451 #define SPDK_CONFIG_URING_PATH 00:12:37.451 #undef SPDK_CONFIG_URING_ZNS 00:12:37.451 #undef SPDK_CONFIG_USDT 00:12:37.451 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:37.451 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:37.451 #define SPDK_CONFIG_VFIO_USER 1 00:12:37.452 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:37.452 #define SPDK_CONFIG_VHOST 1 00:12:37.452 #define SPDK_CONFIG_VIRTIO 1 00:12:37.452 #undef SPDK_CONFIG_VTUNE 00:12:37.452 #define SPDK_CONFIG_VTUNE_DIR 00:12:37.452 #define SPDK_CONFIG_WERROR 1 00:12:37.452 #define SPDK_CONFIG_WPDK_DIR 00:12:37.452 #undef SPDK_CONFIG_XNVME 00:12:37.452 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.452 12:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:37.452 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:37.715 12:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:37.715 
12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:37.715 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:37.716 12:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:37.716 
12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : main 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:37.716 12:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:37.716 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3262435 ]] 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3262435 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.VdA1KF 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.VdA1KF/tests/target /tmp/spdk.VdA1KF 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:12:37.717 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=116596305920 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356509184 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12760203264 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666886144 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678252544 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847934976 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871302656 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23367680 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:12:37.718 12:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677015552 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678256640 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1241088 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935634944 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935647232 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:37.718 * Looking for test storage... 
00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=116596305920 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=14974795776 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.718 12:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:37.718 12:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.718 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:37.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.719 --rc genhtml_branch_coverage=1 00:12:37.719 --rc genhtml_function_coverage=1 00:12:37.719 --rc genhtml_legend=1 00:12:37.719 --rc geninfo_all_blocks=1 00:12:37.719 --rc geninfo_unexecuted_blocks=1 00:12:37.719 00:12:37.719 ' 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:37.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.719 --rc genhtml_branch_coverage=1 00:12:37.719 --rc genhtml_function_coverage=1 00:12:37.719 --rc genhtml_legend=1 00:12:37.719 --rc geninfo_all_blocks=1 00:12:37.719 --rc geninfo_unexecuted_blocks=1 00:12:37.719 00:12:37.719 ' 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:37.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.719 --rc genhtml_branch_coverage=1 00:12:37.719 --rc genhtml_function_coverage=1 00:12:37.719 --rc genhtml_legend=1 00:12:37.719 --rc geninfo_all_blocks=1 00:12:37.719 --rc geninfo_unexecuted_blocks=1 00:12:37.719 00:12:37.719 ' 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:37.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.719 --rc genhtml_branch_coverage=1 00:12:37.719 --rc genhtml_function_coverage=1 00:12:37.719 --rc genhtml_legend=1 00:12:37.719 --rc geninfo_all_blocks=1 00:12:37.719 --rc geninfo_unexecuted_blocks=1 00:12:37.719 00:12:37.719 ' 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.719 12:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:37.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:37.719 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.866 12:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:45.866 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:45.866 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:45.867 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.867 12:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:45.867 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:45.867 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:45.867 12:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:45.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:45.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.712 ms 00:12:45.867 00:12:45.867 --- 10.0.0.2 ping statistics --- 00:12:45.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.867 rtt min/avg/max/mdev = 0.712/0.712/0.712/0.000 ms 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:45.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:45.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:12:45.867 00:12:45.867 --- 10.0.0.1 ping statistics --- 00:12:45.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.867 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:45.867 12:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:45.867 ************************************ 00:12:45.867 START TEST nvmf_filesystem_no_in_capsule 00:12:45.867 ************************************ 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3266175 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3266175 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 3266175 ']' 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.867 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.867 [2024-11-28 12:43:15.519731] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:12:45.867 [2024-11-28 12:43:15.519796] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.868 [2024-11-28 12:43:15.663822] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:45.868 [2024-11-28 12:43:15.722225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.868 [2024-11-28 12:43:15.750954] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.868 [2024-11-28 12:43:15.750999] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:45.868 [2024-11-28 12:43:15.751007] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.868 [2024-11-28 12:43:15.751014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.868 [2024-11-28 12:43:15.751020] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:45.868 [2024-11-28 12:43:15.753138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.868 [2024-11-28 12:43:15.753299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.868 [2024-11-28 12:43:15.753384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.868 [2024-11-28 12:43:15.753384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:46.440 [2024-11-28 12:43:16.401046] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:46.440 Malloc1 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.440 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:46.702 [2024-11-28 12:43:16.569450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.702 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.702 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:46.702 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:46.702 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:46.702 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:46.702 12:43:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:46.702 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:46.702 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.702 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:46.702 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.702 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:46.702 { 00:12:46.702 "name": "Malloc1", 00:12:46.702 "aliases": [ 00:12:46.702 "bab4ddd1-1b80-4c23-959e-cea0cb2a6829" 00:12:46.702 ], 00:12:46.702 "product_name": "Malloc disk", 00:12:46.702 "block_size": 512, 00:12:46.702 "num_blocks": 1048576, 00:12:46.702 "uuid": "bab4ddd1-1b80-4c23-959e-cea0cb2a6829", 00:12:46.702 "assigned_rate_limits": { 00:12:46.702 "rw_ios_per_sec": 0, 00:12:46.702 "rw_mbytes_per_sec": 0, 00:12:46.702 "r_mbytes_per_sec": 0, 00:12:46.702 "w_mbytes_per_sec": 0 00:12:46.702 }, 00:12:46.702 "claimed": true, 00:12:46.702 "claim_type": "exclusive_write", 00:12:46.702 "zoned": false, 00:12:46.702 "supported_io_types": { 00:12:46.702 "read": true, 00:12:46.702 "write": true, 00:12:46.702 "unmap": true, 00:12:46.702 "flush": true, 00:12:46.702 "reset": true, 00:12:46.702 "nvme_admin": false, 00:12:46.702 "nvme_io": false, 00:12:46.702 "nvme_io_md": false, 00:12:46.702 "write_zeroes": true, 00:12:46.702 "zcopy": true, 00:12:46.702 "get_zone_info": false, 00:12:46.702 "zone_management": false, 00:12:46.702 "zone_append": false, 00:12:46.702 "compare": false, 00:12:46.702 "compare_and_write": 
false, 00:12:46.702 "abort": true, 00:12:46.702 "seek_hole": false, 00:12:46.702 "seek_data": false, 00:12:46.702 "copy": true, 00:12:46.702 "nvme_iov_md": false 00:12:46.702 }, 00:12:46.702 "memory_domains": [ 00:12:46.702 { 00:12:46.702 "dma_device_id": "system", 00:12:46.702 "dma_device_type": 1 00:12:46.702 }, 00:12:46.702 { 00:12:46.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.702 "dma_device_type": 2 00:12:46.702 } 00:12:46.702 ], 00:12:46.702 "driver_specific": {} 00:12:46.702 } 00:12:46.702 ]' 00:12:46.702 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:46.702 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:46.703 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:46.703 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:46.703 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:46.703 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:46.703 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:46.703 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.087 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:12:48.087 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:48.087 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.087 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:48.087 12:43:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:50.628 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:50.629 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:50.629 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:50.629 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:50.629 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.629 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:50.629 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:50.629 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:50.629 12:43:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:50.629 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:50.629 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:50.629 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:50.629 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:50.629 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:50.629 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:50.629 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:50.629 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:50.629 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:50.888 12:43:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:51.828 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:51.828 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:51.828 12:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:51.828 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:51.828 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:51.828 ************************************ 00:12:51.828 START TEST filesystem_ext4 00:12:51.828 ************************************ 00:12:51.828 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:51.828 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:51.828 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:51.828 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:51.828 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:51.828 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:51.828 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:51.828 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:51.828 12:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:51.828 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:51.828 12:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:51.828 mke2fs 1.47.0 (5-Feb-2023) 00:12:52.089 Discarding device blocks: 0/522240 done 00:12:52.089 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:52.089 Filesystem UUID: 535b8c06-75a0-4774-a0a4-55741b1d425a 00:12:52.089 Superblock backups stored on blocks: 00:12:52.089 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:52.089 00:12:52.089 Allocating group tables: 0/64 done 00:12:52.089 Writing inode tables: 0/64 done 00:12:52.089 Creating journal (8192 blocks): done 00:12:52.089 Writing superblocks and filesystem accounting information: 0/64 done 00:12:52.089 00:12:52.089 12:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:52.089 12:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:57.375 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:57.375 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:57.375 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:57.375 12:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:57.375 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:57.375 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:57.375 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3266175 00:12:57.375 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:57.375 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:57.375 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:57.375 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:57.635 00:12:57.635 real 0m5.565s 00:12:57.635 user 0m0.032s 00:12:57.635 sys 0m0.072s 00:12:57.635 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.635 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:57.635 ************************************ 00:12:57.635 END TEST filesystem_ext4 00:12:57.635 ************************************ 00:12:57.635 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:57.635 
12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:57.635 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.635 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:57.635 ************************************ 00:12:57.635 START TEST filesystem_btrfs 00:12:57.635 ************************************ 00:12:57.635 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:57.635 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:57.636 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:57.636 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:57.636 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:57.636 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:57.636 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:57.636 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:57.636 12:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:57.636 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:57.636 12:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:57.896 btrfs-progs v6.8.1 00:12:57.896 See https://btrfs.readthedocs.io for more information. 00:12:57.896 00:12:57.896 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:57.896 NOTE: several default settings have changed in version 5.15, please make sure 00:12:57.896 this does not affect your deployments: 00:12:57.896 - DUP for metadata (-m dup) 00:12:57.896 - enabled no-holes (-O no-holes) 00:12:57.896 - enabled free-space-tree (-R free-space-tree) 00:12:57.896 00:12:57.896 Label: (null) 00:12:57.896 UUID: 8204c53d-96e9-46fa-80d6-e46986986cf2 00:12:57.896 Node size: 16384 00:12:57.896 Sector size: 4096 (CPU page size: 4096) 00:12:57.896 Filesystem size: 510.00MiB 00:12:57.896 Block group profiles: 00:12:57.896 Data: single 8.00MiB 00:12:57.896 Metadata: DUP 32.00MiB 00:12:57.896 System: DUP 8.00MiB 00:12:57.896 SSD detected: yes 00:12:57.896 Zoned device: no 00:12:57.896 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:57.896 Checksum: crc32c 00:12:57.896 Number of devices: 1 00:12:57.896 Devices: 00:12:57.896 ID SIZE PATH 00:12:57.896 1 510.00MiB /dev/nvme0n1p1 00:12:57.896 00:12:57.896 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:57.896 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:58.158 12:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:58.158 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:58.158 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:58.158 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:58.158 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:58.158 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:58.158 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3266175 00:12:58.158 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:58.158 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:58.158 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:58.158 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:58.158 00:12:58.158 real 0m0.674s 00:12:58.158 user 0m0.034s 00:12:58.158 sys 0m0.114s 00:12:58.158 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.158 
12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:58.158 ************************************ 00:12:58.158 END TEST filesystem_btrfs 00:12:58.158 ************************************ 00:12:58.466 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:58.466 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:58.466 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.466 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:58.466 ************************************ 00:12:58.466 START TEST filesystem_xfs 00:12:58.466 ************************************ 00:12:58.466 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:58.466 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:58.466 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:58.466 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:58.466 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:58.466 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:58.466 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:58.466 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:58.466 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:58.466 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:58.466 12:43:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:58.466 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:58.466 = sectsz=512 attr=2, projid32bit=1 00:12:58.466 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:58.466 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:58.466 data = bsize=4096 blocks=130560, imaxpct=25 00:12:58.466 = sunit=0 swidth=0 blks 00:12:58.466 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:58.466 log =internal log bsize=4096 blocks=16384, version=2 00:12:58.466 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:58.466 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:59.429 Discarding blocks...Done. 
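The `make_filesystem` traces above (common/autotest_common.sh@930-941) repeat the same pattern for ext4, btrfs, and xfs: select a "force" flag for the filesystem type, then run the matching mkfs tool against the partition. A minimal sketch of that flag-selection logic, reconstructed from the traces (not the exact SPDK helper; the `run` indirection is added here so the command can be dry-run without a real block device, and the helper's retry loop is omitted):

```shell
#!/usr/bin/env bash
# Sketch of the make_filesystem flow traced above: choose the per-tool
# "force" flag, then invoke mkfs.<fstype> on the device.
run() { "$@"; }   # indirection added for dry-running; not in the original

make_filesystem() {
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then
        force=-F        # mkfs.ext4 spells force as -F (sh@935-936)
    else
        force=-f        # mkfs.btrfs and mkfs.xfs use -f (sh@938)
    fi
    run "mkfs.$fstype" "$force" "$dev_name"
}
```

For example, `make_filesystem xfs /dev/nvme0n1p1` reproduces the `mkfs.xfs -f /dev/nvme0n1p1` invocation whose output appears above.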
00:12:59.429 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:59.429 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:01.343 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:01.343 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:01.343 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:01.343 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:01.343 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:01.343 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:01.343 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3266175 00:13:01.343 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:01.343 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:01.343 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:01.343 12:43:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:01.343 00:13:01.343 real 0m3.014s 00:13:01.343 user 0m0.023s 00:13:01.343 sys 0m0.086s 00:13:01.343 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.343 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:01.343 ************************************ 00:13:01.343 END TEST filesystem_xfs 00:13:01.343 ************************************ 00:13:01.343 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:01.606 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:01.606 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.606 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.606 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:01.606 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:01.606 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.606 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:01.606 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.606 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:01.606 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.606 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.606 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:01.606 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.606 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:01.606 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3266175 00:13:01.606 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3266175 ']' 00:13:01.606 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3266175 00:13:01.606 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:01.606 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:01.606 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3266175 00:13:01.867 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:01.867 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:01.867 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3266175' 00:13:01.867 killing process with pid 3266175 00:13:01.867 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3266175 00:13:01.867 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 3266175 00:13:01.867 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:01.867 00:13:01.867 real 0m16.502s 00:13:01.867 user 1m4.787s 00:13:01.867 sys 0m1.439s 00:13:01.867 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.867 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:01.867 ************************************ 00:13:01.867 END TEST nvmf_filesystem_no_in_capsule 00:13:01.867 ************************************ 00:13:02.129 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:02.129 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:02.129 12:43:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.129 12:43:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:02.129 ************************************ 00:13:02.129 START TEST nvmf_filesystem_in_capsule 00:13:02.129 ************************************ 00:13:02.129 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:13:02.129 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:02.129 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:02.129 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:02.129 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:02.129 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.129 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3269697 00:13:02.129 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3269697 00:13:02.129 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:02.129 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3269697 ']' 00:13:02.129 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.129 12:43:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.129 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.129 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.129 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.129 [2024-11-28 12:43:32.103469] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:13:02.129 [2024-11-28 12:43:32.103519] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.129 [2024-11-28 12:43:32.244230] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:02.389 [2024-11-28 12:43:32.299373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.389 [2024-11-28 12:43:32.323105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.389 [2024-11-28 12:43:32.323168] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.389 [2024-11-28 12:43:32.323175] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.389 [2024-11-28 12:43:32.323181] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
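Once `nvmf_tgt` is up, the in-capsule variant of the test issues the same RPC sequence traced in target/filesystem.sh@52-56, the only difference from the no-in-capsule run being `-c 4096` on the transport. A condensed sketch of that sequence (the `rpc_cmd` stub here is an assumption for dry-running; the real harness wraps `scripts/rpc.py` against the target's RPC socket):

```shell
#!/usr/bin/env bash
# Sketch of the target bring-up RPCs traced in this section.
rpc_cmd() {
    # Set RPC to the real rpc.py wrapper to execute; falls back to a
    # dry-run echo here (assumption, not the SPDK implementation).
    ${RPC:-echo rpc.py} "$@"
}

setup_filesystem_target() {
    # -c 4096 enables 4 KiB of in-capsule data (sh@52)
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}
```

After `setup_filesystem_target`, the host side connects with `nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420` and waits for a device whose serial is SPDKISFASTANDAWESOME, as in the no-in-capsule run above.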
00:13:02.390 [2024-11-28 12:43:32.323185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:02.390 [2024-11-28 12:43:32.325045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.390 [2024-11-28 12:43:32.325222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.390 [2024-11-28 12:43:32.325296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.390 [2024-11-28 12:43:32.325296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.959 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:02.959 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:13:02.959 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:02.959 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:02.959 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.960 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.960 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:02.960 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:02.960 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.960 12:43:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.960 [2024-11-28 12:43:32.950669] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.960 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.960 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:02.960 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.960 12:43:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.960 Malloc1 00:13:02.960 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.960 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:02.960 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.960 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.960 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.960 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:02.960 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.960 12:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.960 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.960 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.960 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.960 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.220 [2024-11-28 12:43:33.087444] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.220 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.220 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:03.220 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:13:03.220 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:13:03.220 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:13:03.220 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:13:03.220 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:03.220 12:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.220 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.220 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.220 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:13:03.220 { 00:13:03.220 "name": "Malloc1", 00:13:03.220 "aliases": [ 00:13:03.220 "db482984-eac0-4905-a59b-ff8c3f674383" 00:13:03.220 ], 00:13:03.220 "product_name": "Malloc disk", 00:13:03.220 "block_size": 512, 00:13:03.220 "num_blocks": 1048576, 00:13:03.220 "uuid": "db482984-eac0-4905-a59b-ff8c3f674383", 00:13:03.220 "assigned_rate_limits": { 00:13:03.220 "rw_ios_per_sec": 0, 00:13:03.220 "rw_mbytes_per_sec": 0, 00:13:03.220 "r_mbytes_per_sec": 0, 00:13:03.220 "w_mbytes_per_sec": 0 00:13:03.220 }, 00:13:03.220 "claimed": true, 00:13:03.220 "claim_type": "exclusive_write", 00:13:03.220 "zoned": false, 00:13:03.220 "supported_io_types": { 00:13:03.220 "read": true, 00:13:03.220 "write": true, 00:13:03.220 "unmap": true, 00:13:03.220 "flush": true, 00:13:03.220 "reset": true, 00:13:03.220 "nvme_admin": false, 00:13:03.220 "nvme_io": false, 00:13:03.220 "nvme_io_md": false, 00:13:03.220 "write_zeroes": true, 00:13:03.220 "zcopy": true, 00:13:03.220 "get_zone_info": false, 00:13:03.220 "zone_management": false, 00:13:03.220 "zone_append": false, 00:13:03.220 "compare": false, 00:13:03.220 "compare_and_write": false, 00:13:03.220 "abort": true, 00:13:03.220 "seek_hole": false, 00:13:03.220 "seek_data": false, 00:13:03.220 "copy": true, 00:13:03.220 "nvme_iov_md": false 00:13:03.220 }, 00:13:03.220 "memory_domains": [ 00:13:03.220 { 00:13:03.220 "dma_device_id": "system", 00:13:03.220 "dma_device_type": 1 00:13:03.220 }, 
00:13:03.220 { 00:13:03.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.220 "dma_device_type": 2 00:13:03.220 } 00:13:03.220 ], 00:13:03.220 "driver_specific": {} 00:13:03.220 } 00:13:03.220 ]' 00:13:03.220 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:13:03.220 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:13:03.220 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:13:03.220 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:13:03.220 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:13:03.220 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:13:03.220 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:03.220 12:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.604 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.604 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:13:04.604 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:13:04.604 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:04.604 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:13:07.150 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:07.150 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:07.150 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:07.150 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:07.150 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.150 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:13:07.150 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:07.150 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:07.150 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:07.150 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:07.150 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:07.150 12:43:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:07.150 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:07.150 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:07.150 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:07.150 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:07.150 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:07.150 12:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:07.721 12:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:08.662 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:08.662 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:08.662 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:08.662 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.662 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:08.662 ************************************ 00:13:08.662 START TEST 
filesystem_in_capsule_ext4 00:13:08.662 ************************************ 00:13:08.662 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:08.662 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:08.662 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:08.662 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:08.662 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:13:08.662 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:08.662 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:13:08.662 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:13:08.662 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:13:08.662 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:13:08.662 12:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 
-F /dev/nvme0n1p1 00:13:08.662 mke2fs 1.47.0 (5-Feb-2023) 00:13:08.662 Discarding device blocks: 0/522240 done 00:13:08.662 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:08.662 Filesystem UUID: a21fa7e0-ecb3-4c9a-a0d4-83f1845aab68 00:13:08.662 Superblock backups stored on blocks: 00:13:08.662 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:08.662 00:13:08.662 Allocating group tables: 0/64 done 00:13:08.662 Writing inode tables: 0/64 done 00:13:08.923 Creating journal (8192 blocks): done 00:13:10.382 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:13:10.382 00:13:10.382 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:13:10.382 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:16.960 12:43:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3269697 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:16.960 00:13:16.960 real 0m7.483s 00:13:16.960 user 0m0.027s 00:13:16.960 sys 0m0.082s 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:16.960 ************************************ 00:13:16.960 END TEST filesystem_in_capsule_ext4 00:13:16.960 ************************************ 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:13:16.960 ************************************ 00:13:16.960 START TEST filesystem_in_capsule_btrfs 00:13:16.960 ************************************ 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:13:16.960 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:16.960 12:43:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:16.960 btrfs-progs v6.8.1 00:13:16.960 See https://btrfs.readthedocs.io for more information. 00:13:16.960 00:13:16.960 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:16.960 NOTE: several default settings have changed in version 5.15, please make sure 00:13:16.960 this does not affect your deployments: 00:13:16.960 - DUP for metadata (-m dup) 00:13:16.960 - enabled no-holes (-O no-holes) 00:13:16.960 - enabled free-space-tree (-R free-space-tree) 00:13:16.960 00:13:16.960 Label: (null) 00:13:16.960 UUID: 15093b78-830f-4183-876f-b14b979a5c14 00:13:16.960 Node size: 16384 00:13:16.960 Sector size: 4096 (CPU page size: 4096) 00:13:16.960 Filesystem size: 510.00MiB 00:13:16.960 Block group profiles: 00:13:16.960 Data: single 8.00MiB 00:13:16.960 Metadata: DUP 32.00MiB 00:13:16.960 System: DUP 8.00MiB 00:13:16.960 SSD detected: yes 00:13:16.960 Zoned device: no 00:13:16.961 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:16.961 Checksum: crc32c 00:13:16.961 Number of devices: 1 00:13:16.961 Devices: 00:13:16.961 ID SIZE PATH 00:13:16.961 1 510.00MiB /dev/nvme0n1p1 00:13:16.961 00:13:16.961 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:13:16.961 12:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:17.221 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:17.221 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:17.221 
12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:17.221 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:17.482 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:17.482 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:17.482 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3269697 00:13:17.482 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:17.482 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:17.482 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:17.482 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:17.482 00:13:17.482 real 0m1.184s 00:13:17.482 user 0m0.029s 00:13:17.482 sys 0m0.118s 00:13:17.482 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.482 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:17.482 ************************************ 00:13:17.482 END TEST 
filesystem_in_capsule_btrfs 00:13:17.482 ************************************ 00:13:17.482 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:17.482 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:17.482 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.482 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.482 ************************************ 00:13:17.482 START TEST filesystem_in_capsule_xfs 00:13:17.482 ************************************ 00:13:17.482 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:13:17.482 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:17.482 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:17.482 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:17.482 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:13:17.482 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:17.482 12:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:13:17.482 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:13:17.482 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:13:17.482 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:13:17.482 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:17.482 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:17.482 = sectsz=512 attr=2, projid32bit=1 00:13:17.482 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:17.482 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:17.482 data = bsize=4096 blocks=130560, imaxpct=25 00:13:17.482 = sunit=0 swidth=0 blks 00:13:17.482 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:17.482 log =internal log bsize=4096 blocks=16384, version=2 00:13:17.482 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:17.482 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:18.868 Discarding blocks...Done. 
00:13:18.868 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:18.868 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3269697 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:20.779 00:13:20.779 real 0m3.118s 00:13:20.779 user 0m0.033s 00:13:20.779 sys 0m0.075s 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:20.779 ************************************ 00:13:20.779 END TEST filesystem_in_capsule_xfs 00:13:20.779 ************************************ 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:20.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.779 12:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3269697 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3269697 ']' 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3269697 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.779 12:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3269697 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3269697' 00:13:20.779 killing process with pid 3269697 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3269697 00:13:20.779 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3269697 00:13:21.040 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:21.040 00:13:21.040 real 0m19.039s 00:13:21.040 user 1m14.977s 00:13:21.040 sys 0m1.404s 00:13:21.040 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.040 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:21.040 ************************************ 00:13:21.040 END TEST nvmf_filesystem_in_capsule 00:13:21.040 ************************************ 00:13:21.040 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:21.040 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:21.040 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:21.040 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:21.040 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:21.040 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:21.040 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:21.040 rmmod nvme_tcp 00:13:21.040 rmmod nvme_fabrics 00:13:21.300 rmmod nvme_keyring 00:13:21.300 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:21.300 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:21.300 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:21.300 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:21.300 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:21.300 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:21.300 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:21.300 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:13:21.300 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:13:21.300 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:21.300 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:13:21.300 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:21.300 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:21.300 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.300 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.300 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.212 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:23.212 00:13:23.212 real 0m45.992s 00:13:23.212 user 2m22.190s 00:13:23.212 sys 0m8.804s 00:13:23.212 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:23.212 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:23.212 ************************************ 00:13:23.212 END TEST nvmf_filesystem 00:13:23.212 ************************************ 00:13:23.212 12:43:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:23.212 12:43:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:23.212 12:43:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:23.212 12:43:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:23.473 ************************************ 00:13:23.473 START TEST nvmf_target_discovery 00:13:23.473 ************************************ 00:13:23.473 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:23.474 * Looking for test storage... 
00:13:23.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:23.474 
12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:23.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.474 --rc genhtml_branch_coverage=1 00:13:23.474 --rc genhtml_function_coverage=1 00:13:23.474 --rc genhtml_legend=1 00:13:23.474 --rc geninfo_all_blocks=1 00:13:23.474 --rc geninfo_unexecuted_blocks=1 00:13:23.474 00:13:23.474 ' 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:23.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.474 --rc genhtml_branch_coverage=1 00:13:23.474 --rc genhtml_function_coverage=1 00:13:23.474 --rc genhtml_legend=1 00:13:23.474 --rc geninfo_all_blocks=1 00:13:23.474 --rc geninfo_unexecuted_blocks=1 00:13:23.474 00:13:23.474 ' 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:23.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.474 --rc genhtml_branch_coverage=1 00:13:23.474 --rc genhtml_function_coverage=1 00:13:23.474 --rc genhtml_legend=1 00:13:23.474 --rc geninfo_all_blocks=1 00:13:23.474 --rc geninfo_unexecuted_blocks=1 00:13:23.474 00:13:23.474 ' 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:23.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.474 --rc genhtml_branch_coverage=1 00:13:23.474 --rc genhtml_function_coverage=1 00:13:23.474 --rc genhtml_legend=1 00:13:23.474 --rc geninfo_all_blocks=1 00:13:23.474 --rc geninfo_unexecuted_blocks=1 00:13:23.474 00:13:23.474 ' 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.474 12:43:53 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.474 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:23.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:23.475 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:23.475 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:23.475 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:23.475 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:13:23.475 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:23.475 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:23.475 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:23.475 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:23.735 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:23.735 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.735 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:23.735 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:23.735 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:23.735 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.735 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.735 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.735 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:23.735 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:23.735 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:23.735 12:43:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.878 12:44:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:31.878 12:44:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:31.878 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:31.878 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:31.878 12:44:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:31.878 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:31.879 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:31.879 12:44:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:31.879 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:31.879 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:31.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:31.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:13:31.879 00:13:31.879 --- 10.0.0.2 ping statistics --- 00:13:31.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.879 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:31.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:31.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:13:31.879 00:13:31.879 --- 10.0.0.1 ping statistics --- 00:13:31.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.879 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3277682 00:13:31.879 12:44:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3277682 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3277682 ']' 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:31.879 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.879 [2024-11-28 12:44:01.245484] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:13:31.879 [2024-11-28 12:44:01.245556] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.879 [2024-11-28 12:44:01.390513] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
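Condensed from the `nvmf/common.sh` lines above, the network plumbing the harness performs before launching the target can be sketched as the shell sequence below. The interface names (`cvl_0_0`, `cvl_0_1`), namespace name, addresses, and port come from the log; the function only *emits* the commands rather than running them, since applying them requires root and the actual NICs (pipe to `sudo sh` to apply on a matching machine):

```shell
# Illustrative reconstruction of the namespace setup shown in the log.
# Emits the commands instead of executing them; they need root privileges
# and the cvl_0_0 / cvl_0_1 interfaces to exist.
netns_setup_cmds() {
    local ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1
    cat <<EOF
ip -4 addr flush $tgt_if
ip -4 addr flush $ini_if
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add 10.0.0.1/24 dev $ini_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT
EOF
}
netns_setup_cmds
```

After this, the log verifies connectivity with a `ping` in each direction and starts `nvmf_tgt` inside the namespace via `ip netns exec cvl_0_0_ns_spdk`.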
00:13:31.879 [2024-11-28 12:44:01.435902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:31.879 [2024-11-28 12:44:01.464404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.879 [2024-11-28 12:44:01.464449] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.879 [2024-11-28 12:44:01.464458] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.879 [2024-11-28 12:44:01.464466] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.879 [2024-11-28 12:44:01.464472] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.879 [2024-11-28 12:44:01.466691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.879 [2024-11-28 12:44:01.466850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.879 [2024-11-28 12:44:01.467000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.879 [2024-11-28 12:44:01.467001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.141 
12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.141 [2024-11-28 12:44:02.119140] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.141 Null1 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.141 [2024-11-28 12:44:02.190438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.141 Null2 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:32.141 12:44:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.141 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.141 Null3 00:13:32.141 12:44:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.142 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:32.142 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.142 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 
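The repeating `discovery.sh@27`–`@30` RPC calls above form a loop over subsystems 1–4: create a 100 MiB null bdev with 512-byte blocks, create subsystem `nqn.2016-06.io.spdk:cnode$i`, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. A minimal sketch of that loop (assuming SPDK's `rpc.py` is on `PATH`; the harness's `rpc_cmd` wrapper resolves the socket path itself, so the exact invocation is illustrative):

```shell
# Emit the per-subsystem RPC sequence the test script issues for one index.
# Serial numbers follow the log's SPDK00000000000001..4 pattern.
build_subsystem_cmds() {
    local i=$1
    echo "rpc.py bdev_null_create Null$i 102400 512"
    echo "rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i"
    echo "rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i"
    echo "rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
}

for i in 1 2 3 4; do
    build_subsystem_cmds "$i"
done
```

The discovery listener (`nvmf_subsystem_add_listener discovery ...`) and the 4430 referral are then added once, outside this loop, before `nvme discover` is run against 10.0.0.2:4420.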
00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.404 Null4 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.404 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:13:32.666 00:13:32.666 Discovery Log Number of Records 6, Generation counter 6 00:13:32.666 =====Discovery Log Entry 0====== 00:13:32.666 trtype: tcp 00:13:32.666 adrfam: ipv4 00:13:32.666 subtype: current discovery subsystem 00:13:32.666 treq: not required 00:13:32.666 portid: 0 00:13:32.666 trsvcid: 4420 00:13:32.666 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:32.666 traddr: 10.0.0.2 00:13:32.666 eflags: explicit discovery connections, duplicate discovery information 00:13:32.666 sectype: none 00:13:32.666 =====Discovery Log Entry 1====== 00:13:32.666 trtype: tcp 00:13:32.666 adrfam: ipv4 00:13:32.666 subtype: nvme subsystem 00:13:32.666 treq: 
not required 00:13:32.666 portid: 0 00:13:32.666 trsvcid: 4420 00:13:32.666 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:32.666 traddr: 10.0.0.2 00:13:32.666 eflags: none 00:13:32.666 sectype: none 00:13:32.666 =====Discovery Log Entry 2====== 00:13:32.666 trtype: tcp 00:13:32.666 adrfam: ipv4 00:13:32.666 subtype: nvme subsystem 00:13:32.666 treq: not required 00:13:32.666 portid: 0 00:13:32.666 trsvcid: 4420 00:13:32.666 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:32.666 traddr: 10.0.0.2 00:13:32.666 eflags: none 00:13:32.666 sectype: none 00:13:32.666 =====Discovery Log Entry 3====== 00:13:32.666 trtype: tcp 00:13:32.666 adrfam: ipv4 00:13:32.666 subtype: nvme subsystem 00:13:32.666 treq: not required 00:13:32.666 portid: 0 00:13:32.666 trsvcid: 4420 00:13:32.666 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:32.666 traddr: 10.0.0.2 00:13:32.666 eflags: none 00:13:32.666 sectype: none 00:13:32.666 =====Discovery Log Entry 4====== 00:13:32.666 trtype: tcp 00:13:32.666 adrfam: ipv4 00:13:32.666 subtype: nvme subsystem 00:13:32.666 treq: not required 00:13:32.666 portid: 0 00:13:32.666 trsvcid: 4420 00:13:32.666 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:32.666 traddr: 10.0.0.2 00:13:32.666 eflags: none 00:13:32.666 sectype: none 00:13:32.666 =====Discovery Log Entry 5====== 00:13:32.666 trtype: tcp 00:13:32.666 adrfam: ipv4 00:13:32.666 subtype: discovery subsystem referral 00:13:32.666 treq: not required 00:13:32.666 portid: 0 00:13:32.666 trsvcid: 4430 00:13:32.666 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:32.666 traddr: 10.0.0.2 00:13:32.666 eflags: none 00:13:32.666 sectype: none 00:13:32.666 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:32.666 Perform nvmf subsystem discovery via RPC 00:13:32.666 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:32.666 12:44:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.666 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.666 [ 00:13:32.666 { 00:13:32.666 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:32.666 "subtype": "Discovery", 00:13:32.666 "listen_addresses": [ 00:13:32.666 { 00:13:32.666 "trtype": "TCP", 00:13:32.666 "adrfam": "IPv4", 00:13:32.666 "traddr": "10.0.0.2", 00:13:32.666 "trsvcid": "4420" 00:13:32.666 } 00:13:32.666 ], 00:13:32.666 "allow_any_host": true, 00:13:32.666 "hosts": [] 00:13:32.666 }, 00:13:32.666 { 00:13:32.666 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.666 "subtype": "NVMe", 00:13:32.666 "listen_addresses": [ 00:13:32.666 { 00:13:32.666 "trtype": "TCP", 00:13:32.666 "adrfam": "IPv4", 00:13:32.666 "traddr": "10.0.0.2", 00:13:32.666 "trsvcid": "4420" 00:13:32.666 } 00:13:32.666 ], 00:13:32.666 "allow_any_host": true, 00:13:32.666 "hosts": [], 00:13:32.666 "serial_number": "SPDK00000000000001", 00:13:32.666 "model_number": "SPDK bdev Controller", 00:13:32.666 "max_namespaces": 32, 00:13:32.666 "min_cntlid": 1, 00:13:32.666 "max_cntlid": 65519, 00:13:32.666 "namespaces": [ 00:13:32.666 { 00:13:32.666 "nsid": 1, 00:13:32.666 "bdev_name": "Null1", 00:13:32.666 "name": "Null1", 00:13:32.666 "nguid": "A2E698CCC1C1428FBF51A1BAEC1C0681", 00:13:32.667 "uuid": "a2e698cc-c1c1-428f-bf51-a1baec1c0681" 00:13:32.667 } 00:13:32.667 ] 00:13:32.667 }, 00:13:32.667 { 00:13:32.667 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:32.667 "subtype": "NVMe", 00:13:32.667 "listen_addresses": [ 00:13:32.667 { 00:13:32.667 "trtype": "TCP", 00:13:32.667 "adrfam": "IPv4", 00:13:32.667 "traddr": "10.0.0.2", 00:13:32.667 "trsvcid": "4420" 00:13:32.667 } 00:13:32.667 ], 00:13:32.667 "allow_any_host": true, 00:13:32.667 "hosts": [], 00:13:32.667 "serial_number": "SPDK00000000000002", 00:13:32.667 "model_number": "SPDK bdev Controller", 00:13:32.667 
"max_namespaces": 32, 00:13:32.667 "min_cntlid": 1, 00:13:32.667 "max_cntlid": 65519, 00:13:32.667 "namespaces": [ 00:13:32.667 { 00:13:32.667 "nsid": 1, 00:13:32.667 "bdev_name": "Null2", 00:13:32.667 "name": "Null2", 00:13:32.667 "nguid": "3B9ABC72715A496FB736BD0A1D38D3ED", 00:13:32.667 "uuid": "3b9abc72-715a-496f-b736-bd0a1d38d3ed" 00:13:32.667 } 00:13:32.667 ] 00:13:32.667 }, 00:13:32.667 { 00:13:32.667 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:32.667 "subtype": "NVMe", 00:13:32.667 "listen_addresses": [ 00:13:32.667 { 00:13:32.667 "trtype": "TCP", 00:13:32.667 "adrfam": "IPv4", 00:13:32.667 "traddr": "10.0.0.2", 00:13:32.667 "trsvcid": "4420" 00:13:32.667 } 00:13:32.667 ], 00:13:32.667 "allow_any_host": true, 00:13:32.667 "hosts": [], 00:13:32.667 "serial_number": "SPDK00000000000003", 00:13:32.667 "model_number": "SPDK bdev Controller", 00:13:32.667 "max_namespaces": 32, 00:13:32.667 "min_cntlid": 1, 00:13:32.667 "max_cntlid": 65519, 00:13:32.667 "namespaces": [ 00:13:32.667 { 00:13:32.667 "nsid": 1, 00:13:32.667 "bdev_name": "Null3", 00:13:32.667 "name": "Null3", 00:13:32.667 "nguid": "BF17F7D0C7DE4B98B4973680CE0FE344", 00:13:32.667 "uuid": "bf17f7d0-c7de-4b98-b497-3680ce0fe344" 00:13:32.667 } 00:13:32.667 ] 00:13:32.667 }, 00:13:32.667 { 00:13:32.667 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:32.667 "subtype": "NVMe", 00:13:32.667 "listen_addresses": [ 00:13:32.667 { 00:13:32.667 "trtype": "TCP", 00:13:32.667 "adrfam": "IPv4", 00:13:32.667 "traddr": "10.0.0.2", 00:13:32.667 "trsvcid": "4420" 00:13:32.667 } 00:13:32.667 ], 00:13:32.667 "allow_any_host": true, 00:13:32.667 "hosts": [], 00:13:32.667 "serial_number": "SPDK00000000000004", 00:13:32.667 "model_number": "SPDK bdev Controller", 00:13:32.667 "max_namespaces": 32, 00:13:32.667 "min_cntlid": 1, 00:13:32.667 "max_cntlid": 65519, 00:13:32.667 "namespaces": [ 00:13:32.667 { 00:13:32.667 "nsid": 1, 00:13:32.667 "bdev_name": "Null4", 00:13:32.667 "name": "Null4", 00:13:32.667 "nguid": 
"5499A3440C414A3FBE7F31A6F19A4829", 00:13:32.667 "uuid": "5499a344-0c41-4a3f-be7f-31a6f19a4829" 00:13:32.667 } 00:13:32.667 ] 00:13:32.667 } 00:13:32.667 ] 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- common/autotest_common.sh@10 -- # set +x 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.667 12:44:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:32.667 rmmod nvme_tcp 00:13:32.667 rmmod nvme_fabrics 00:13:32.667 rmmod nvme_keyring 00:13:32.667 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:32.928 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:32.928 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:32.928 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3277682 ']' 00:13:32.928 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3277682 00:13:32.929 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@954 -- # '[' -z 3277682 ']' 00:13:32.929 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3277682 00:13:32.929 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:13:32.929 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:32.929 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3277682 00:13:32.929 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:32.929 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:32.929 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3277682' 00:13:32.929 killing process with pid 3277682 00:13:32.929 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3277682 00:13:32.929 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3277682 00:13:32.929 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:32.929 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:32.929 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:32.929 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:32.929 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:32.929 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:13:32.929 12:44:03 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:13:32.929 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:32.929 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:32.929 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.929 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.929 12:44:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.474 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:35.475 00:13:35.475 real 0m11.739s 00:13:35.475 user 0m8.628s 00:13:35.475 sys 0m6.147s 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.475 ************************************ 00:13:35.475 END TEST nvmf_target_discovery 00:13:35.475 ************************************ 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:35.475 ************************************ 00:13:35.475 START TEST nvmf_referrals 00:13:35.475 
************************************ 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:35.475 * Looking for test storage... 00:13:35.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:35.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.475 --rc genhtml_branch_coverage=1 00:13:35.475 --rc genhtml_function_coverage=1 00:13:35.475 --rc genhtml_legend=1 00:13:35.475 --rc geninfo_all_blocks=1 00:13:35.475 --rc geninfo_unexecuted_blocks=1 00:13:35.475 00:13:35.475 ' 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:35.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.475 --rc genhtml_branch_coverage=1 00:13:35.475 --rc genhtml_function_coverage=1 00:13:35.475 --rc genhtml_legend=1 00:13:35.475 --rc geninfo_all_blocks=1 00:13:35.475 --rc geninfo_unexecuted_blocks=1 00:13:35.475 00:13:35.475 ' 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:35.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.475 --rc genhtml_branch_coverage=1 00:13:35.475 --rc genhtml_function_coverage=1 00:13:35.475 --rc genhtml_legend=1 00:13:35.475 --rc geninfo_all_blocks=1 00:13:35.475 --rc geninfo_unexecuted_blocks=1 00:13:35.475 00:13:35.475 ' 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:35.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.475 --rc genhtml_branch_coverage=1 00:13:35.475 --rc genhtml_function_coverage=1 00:13:35.475 --rc genhtml_legend=1 00:13:35.475 --rc geninfo_all_blocks=1 00:13:35.475 --rc geninfo_unexecuted_blocks=1 00:13:35.475 00:13:35.475 ' 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.475 12:44:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:35.475 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:35.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:35.476 12:44:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # 
xtrace_disable 00:13:35.476 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:43.616 12:44:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:43.616 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:43.616 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:43.616 12:44:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:43.616 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:43.616 12:44:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:43.616 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:43.616 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # 
NVMF_SECOND_TARGET_IP= 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:43.617 12:44:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:43.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:13:43.617 00:13:43.617 --- 10.0.0.2 ping statistics --- 00:13:43.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.617 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:43.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:13:43.617 00:13:43.617 --- 10.0.0.1 ping statistics --- 00:13:43.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.617 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:43.617 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:43.617 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:43.617 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:43.617 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:43.617 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:43.617 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3282368 00:13:43.617 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3282368 00:13:43.617 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:43.617 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3282368 ']' 00:13:43.617 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.617 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:43.617 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.617 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:43.617 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:43.617 [2024-11-28 12:44:13.096751] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:13:43.617 [2024-11-28 12:44:13.096817] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.617 [2024-11-28 12:44:13.241405] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:43.617 [2024-11-28 12:44:13.300240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:43.617 [2024-11-28 12:44:13.328173] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.617 [2024-11-28 12:44:13.328216] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.617 [2024-11-28 12:44:13.328224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.617 [2024-11-28 12:44:13.328231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.617 [2024-11-28 12:44:13.328237] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:43.617 [2024-11-28 12:44:13.330080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.617 [2024-11-28 12:44:13.330243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.617 [2024-11-28 12:44:13.330298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.617 [2024-11-28 12:44:13.330298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:43.879 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:43.879 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:13:43.879 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:43.879 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:43.879 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:43.879 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.879 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:43.879 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.879 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:43.879 [2024-11-28 12:44:13.982166] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.879 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.879 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:43.879 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.879 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:44.140 [2024-11-28 12:44:14.013436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- 
# rpc_cmd nvmf_discovery_get_referrals 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:44.141 12:44:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:44.141 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:44.402 12:44:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq 
-r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:44.402 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:44.665 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:44.930 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:44.930 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:44.930 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:44.930 12:44:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:44.930 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:44.930 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:44.930 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:44.930 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:44.930 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:44.930 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:44.930 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:44.930 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:44.930 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:45.327 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:45.327 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t 
tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:45.327 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.327 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:45.327 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.327 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:45.327 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:45.327 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:45.327 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:45.327 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.327 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:45.327 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:45.327 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.327 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:45.327 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:45.327 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:45.327 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:45.327 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:45.327 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:45.327 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:45.327 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:45.587 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:45.587 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:45.587 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:45.587 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:45.587 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:45.587 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:45.587 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:45.587 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:45.587 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:45.587 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:45.587 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery 
subsystem referral' 00:13:45.587 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:45.587 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:45.847 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:45.847 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:45.847 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.847 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:45.847 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.847 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:45.847 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:45.847 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.847 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:45.847 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.847 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:45.847 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # 
get_referral_ips nvme 00:13:45.847 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:45.847 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:45.847 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:45.847 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:45.847 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:46.108 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:46.108 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:46.108 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:46.108 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:46.108 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:46.108 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:46.108 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:46.108 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:13:46.108 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:46.108 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:46.108 rmmod nvme_tcp 00:13:46.109 rmmod nvme_fabrics 00:13:46.109 rmmod nvme_keyring 00:13:46.109 12:44:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:46.109 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:46.109 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:46.109 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3282368 ']' 00:13:46.109 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3282368 00:13:46.109 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3282368 ']' 00:13:46.109 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3282368 00:13:46.109 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:13:46.109 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:46.109 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3282368 00:13:46.109 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:46.109 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:46.109 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3282368' 00:13:46.109 killing process with pid 3282368 00:13:46.109 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3282368 00:13:46.109 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3282368 00:13:46.369 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:46.369 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp 
== \t\c\p ]] 00:13:46.369 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:46.369 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:46.369 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:46.369 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:46.369 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:46.369 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:46.369 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:46.369 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.369 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:46.369 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.284 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:48.284 00:13:48.284 real 0m13.212s 00:13:48.284 user 0m15.104s 00:13:48.284 sys 0m6.598s 00:13:48.284 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:48.284 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:48.284 ************************************ 00:13:48.284 END TEST nvmf_referrals 00:13:48.284 ************************************ 00:13:48.545 12:44:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:48.545 12:44:18 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:48.545 12:44:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:48.545 12:44:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:48.545 ************************************ 00:13:48.545 START TEST nvmf_connect_disconnect 00:13:48.545 ************************************ 00:13:48.545 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:48.545 * Looking for test storage... 00:13:48.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:48.545 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:48.545 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:13:48.545 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # 
IFS=.-: 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:48.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.808 --rc genhtml_branch_coverage=1 00:13:48.808 --rc 
genhtml_function_coverage=1 00:13:48.808 --rc genhtml_legend=1 00:13:48.808 --rc geninfo_all_blocks=1 00:13:48.808 --rc geninfo_unexecuted_blocks=1 00:13:48.808 00:13:48.808 ' 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:48.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.808 --rc genhtml_branch_coverage=1 00:13:48.808 --rc genhtml_function_coverage=1 00:13:48.808 --rc genhtml_legend=1 00:13:48.808 --rc geninfo_all_blocks=1 00:13:48.808 --rc geninfo_unexecuted_blocks=1 00:13:48.808 00:13:48.808 ' 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:48.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.808 --rc genhtml_branch_coverage=1 00:13:48.808 --rc genhtml_function_coverage=1 00:13:48.808 --rc genhtml_legend=1 00:13:48.808 --rc geninfo_all_blocks=1 00:13:48.808 --rc geninfo_unexecuted_blocks=1 00:13:48.808 00:13:48.808 ' 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:48.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.808 --rc genhtml_branch_coverage=1 00:13:48.808 --rc genhtml_function_coverage=1 00:13:48.808 --rc genhtml_legend=1 00:13:48.808 --rc geninfo_all_blocks=1 00:13:48.808 --rc geninfo_unexecuted_blocks=1 00:13:48.808 00:13:48.808 ' 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.808 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.809 12:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.809 12:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:48.809 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.809 12:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:48.809 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@320 -- # local -ga e810 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:56.959 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:56.959 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:56.959 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:56.959 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:56.959 12:44:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:56.959 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:56.959 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:56.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:56.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:13:56.960 00:13:56.960 --- 10.0.0.2 ping statistics --- 00:13:56.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.960 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:56.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:56.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:13:56.960 00:13:56.960 --- 10.0.0.1 ping statistics --- 00:13:56.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.960 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 
00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3287166 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3287166 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3287166 ']' 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:56.960 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:56.960 [2024-11-28 12:44:26.386188] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:13:56.960 [2024-11-28 12:44:26.386258] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.960 [2024-11-28 12:44:26.530942] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:56.960 [2024-11-28 12:44:26.588401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:56.960 [2024-11-28 12:44:26.616522] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.960 [2024-11-28 12:44:26.616565] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.960 [2024-11-28 12:44:26.616574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.960 [2024-11-28 12:44:26.616581] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.960 [2024-11-28 12:44:26.616588] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:56.960 [2024-11-28 12:44:26.618720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.960 [2024-11-28 12:44:26.618881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:56.960 [2024-11-28 12:44:26.619043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:56.960 [2024-11-28 12:44:26.619044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:57.222 [2024-11-28 12:44:27.266577] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 
64 512 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.222 12:44:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:57.222 [2024-11-28 12:44:27.342335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:13:57.222 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:59.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.015 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.773 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.027 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:58.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:03.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:05.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:07.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:10.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:17.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:19.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:21.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:26.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:28.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:31.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:33.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:35.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:38.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:42.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:45.100 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:47.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:49.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:52.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:52.220 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:17:52.220 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:52.220 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:52.220 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:17:52.220 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:52.220 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:17:52.220 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:52.220 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:52.220 rmmod nvme_tcp 00:17:52.220 rmmod nvme_fabrics 00:17:52.220 rmmod nvme_keyring 00:17:52.220 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:52.220 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:17:52.220 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:17:52.220 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3287166 ']' 00:17:52.220 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3287166 00:17:52.220 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@954 -- # '[' -z 3287166 ']' 00:17:52.220 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3287166 00:17:52.220 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:17:52.220 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.220 12:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3287166 00:17:52.220 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:52.220 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:52.220 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3287166' 00:17:52.220 killing process with pid 3287166 00:17:52.220 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3287166 00:17:52.220 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3287166 00:17:52.220 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:52.220 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:52.220 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:52.220 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:17:52.220 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:17:52.220 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:52.220 
12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:17:52.220 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:52.220 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:52.220 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.220 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.220 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.133 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:54.133 00:17:54.133 real 4m5.711s 00:17:54.133 user 15m34.390s 00:17:54.133 sys 0m25.897s 00:17:54.133 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.133 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:54.133 ************************************ 00:17:54.133 END TEST nvmf_connect_disconnect 00:17:54.133 ************************************ 00:17:54.133 12:48:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:54.133 12:48:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:54.133 12:48:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.133 12:48:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:54.395 ************************************ 00:17:54.395 START TEST 
nvmf_multitarget 00:17:54.395 ************************************ 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:54.395 * Looking for test storage... 00:17:54.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 
00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 
00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:54.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.395 --rc genhtml_branch_coverage=1 00:17:54.395 --rc genhtml_function_coverage=1 00:17:54.395 --rc genhtml_legend=1 00:17:54.395 --rc geninfo_all_blocks=1 00:17:54.395 --rc geninfo_unexecuted_blocks=1 00:17:54.395 00:17:54.395 ' 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:54.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.395 --rc genhtml_branch_coverage=1 00:17:54.395 --rc genhtml_function_coverage=1 00:17:54.395 --rc genhtml_legend=1 00:17:54.395 --rc geninfo_all_blocks=1 00:17:54.395 --rc geninfo_unexecuted_blocks=1 00:17:54.395 00:17:54.395 ' 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:54.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.395 --rc genhtml_branch_coverage=1 00:17:54.395 --rc genhtml_function_coverage=1 00:17:54.395 --rc genhtml_legend=1 00:17:54.395 --rc geninfo_all_blocks=1 00:17:54.395 --rc geninfo_unexecuted_blocks=1 00:17:54.395 00:17:54.395 ' 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:54.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.395 --rc genhtml_branch_coverage=1 00:17:54.395 --rc genhtml_function_coverage=1 00:17:54.395 --rc genhtml_legend=1 00:17:54.395 --rc geninfo_all_blocks=1 00:17:54.395 --rc geninfo_unexecuted_blocks=1 00:17:54.395 00:17:54.395 ' 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.395 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:54.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:54.396 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:54.657 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:54.657 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:54.657 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:54.657 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:54.657 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:54.657 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:54.657 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:54.657 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.657 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:54.657 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.657 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:54.657 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:54.657 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:17:54.657 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:02.794 12:48:31 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:02.794 12:48:31 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:02.794 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:02.794 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:02.794 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:02.794 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:02.794 12:48:31 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:02.794 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:02.795 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:02.795 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:02.795 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:02.795 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:18:02.795 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:02.795 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:02.795 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:02.795 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:02.795 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:02.795 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:02.795 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:02.795 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:02.795 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:02.795 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:02.795 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:02.795 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:02.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:02.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:18:02.795 00:18:02.795 --- 10.0.0.2 ping statistics --- 00:18:02.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.795 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:18:02.795 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:02.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:02.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:18:02.795 00:18:02.795 --- 10.0.0.1 ping statistics --- 00:18:02.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.795 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:18:02.795 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.795 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:18:02.795 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:02.795 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.795 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:02.795 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:02.795 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.795 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:02.795 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:02.795 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:18:02.795 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- 
# timing_enter start_nvmf_tgt 00:18:02.795 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:02.795 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:02.795 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3339365 00:18:02.795 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3339365 00:18:02.795 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:02.795 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3339365 ']' 00:18:02.795 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.795 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.795 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.795 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.795 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:02.795 [2024-11-28 12:48:32.128884] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:18:02.795 [2024-11-28 12:48:32.128947] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.795 [2024-11-28 12:48:32.273276] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:02.795 [2024-11-28 12:48:32.332870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:02.795 [2024-11-28 12:48:32.360860] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.795 [2024-11-28 12:48:32.360909] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.795 [2024-11-28 12:48:32.360918] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.795 [2024-11-28 12:48:32.360924] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.795 [2024-11-28 12:48:32.360930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:02.795 [2024-11-28 12:48:32.362811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.795 [2024-11-28 12:48:32.362968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.795 [2024-11-28 12:48:32.363128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.795 [2024-11-28 12:48:32.363129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:03.058 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.058 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:18:03.058 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:03.058 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:03.058 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:03.058 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.058 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:03.058 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:03.058 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:18:03.058 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:18:03.058 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:18:03.319 "nvmf_tgt_1" 00:18:03.319 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:18:03.319 "nvmf_tgt_2" 00:18:03.319 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:03.319 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:18:03.579 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:18:03.579 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:18:03.580 true 00:18:03.580 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:18:03.580 true 00:18:03.580 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:03.580 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:18:03.841 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:18:03.841 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:18:03.841 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:18:03.841 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:03.841 12:48:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:18:03.841 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:03.841 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:18:03.841 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:03.841 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:03.841 rmmod nvme_tcp 00:18:03.841 rmmod nvme_fabrics 00:18:03.841 rmmod nvme_keyring 00:18:03.841 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:03.841 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:18:03.841 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:18:03.841 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3339365 ']' 00:18:03.841 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3339365 00:18:03.841 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3339365 ']' 00:18:03.841 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3339365 00:18:03.841 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:18:03.841 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:03.841 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3339365 00:18:03.841 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:03.841 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:18:03.841 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3339365' 00:18:03.841 killing process with pid 3339365 00:18:03.841 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3339365 00:18:03.841 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3339365 00:18:04.102 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:04.102 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:04.102 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:04.102 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:18:04.102 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:18:04.102 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:04.102 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:18:04.102 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:04.102 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:04.102 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.102 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:04.102 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:06.650 
00:18:06.650 real 0m11.890s 00:18:06.650 user 0m9.958s 00:18:06.650 sys 0m6.179s 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:06.650 ************************************ 00:18:06.650 END TEST nvmf_multitarget 00:18:06.650 ************************************ 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:06.650 ************************************ 00:18:06.650 START TEST nvmf_rpc 00:18:06.650 ************************************ 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:06.650 * Looking for test storage... 
00:18:06.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:06.650 12:48:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:06.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.650 --rc genhtml_branch_coverage=1 00:18:06.650 --rc genhtml_function_coverage=1 00:18:06.650 --rc genhtml_legend=1 00:18:06.650 --rc geninfo_all_blocks=1 00:18:06.650 --rc geninfo_unexecuted_blocks=1 
00:18:06.650 00:18:06.650 ' 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:06.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.650 --rc genhtml_branch_coverage=1 00:18:06.650 --rc genhtml_function_coverage=1 00:18:06.650 --rc genhtml_legend=1 00:18:06.650 --rc geninfo_all_blocks=1 00:18:06.650 --rc geninfo_unexecuted_blocks=1 00:18:06.650 00:18:06.650 ' 00:18:06.650 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:06.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.650 --rc genhtml_branch_coverage=1 00:18:06.650 --rc genhtml_function_coverage=1 00:18:06.650 --rc genhtml_legend=1 00:18:06.651 --rc geninfo_all_blocks=1 00:18:06.651 --rc geninfo_unexecuted_blocks=1 00:18:06.651 00:18:06.651 ' 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:06.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.651 --rc genhtml_branch_coverage=1 00:18:06.651 --rc genhtml_function_coverage=1 00:18:06.651 --rc genhtml_legend=1 00:18:06.651 --rc geninfo_all_blocks=1 00:18:06.651 --rc geninfo_unexecuted_blocks=1 00:18:06.651 00:18:06.651 ' 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:06.651 12:48:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:06.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:06.651 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:18:06.651 12:48:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:14.795 
12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 
(0x8086 - 0x159b)' 00:18:14.795 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:14.795 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:14.796 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:14.796 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:14.796 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:14.796 12:48:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:14.796 
12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:14.796 12:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:14.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:14.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:18:14.796 00:18:14.796 --- 10.0.0.2 ping statistics --- 00:18:14.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.796 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:14.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:14.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:18:14.796 00:18:14.796 --- 10.0.0.1 ping statistics --- 00:18:14.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.796 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3344020 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3344020 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3344020 ']' 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.796 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:14.796 [2024-11-28 12:48:44.157263] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:18:14.796 [2024-11-28 12:48:44.157331] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.796 [2024-11-28 12:48:44.302034] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:14.796 [2024-11-28 12:48:44.362247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:14.796 [2024-11-28 12:48:44.390573] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.796 [2024-11-28 12:48:44.390620] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:14.796 [2024-11-28 12:48:44.390628] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.796 [2024-11-28 12:48:44.390635] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:14.796 [2024-11-28 12:48:44.390641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:14.796 [2024-11-28 12:48:44.392564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.796 [2024-11-28 12:48:44.392726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.796 [2024-11-28 12:48:44.392887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.796 [2024-11-28 12:48:44.392887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:15.059 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.059 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:15.059 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:15.059 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:15.059 12:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.059 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.059 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:18:15.059 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.059 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.059 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.059 12:48:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:18:15.059 "tick_rate": 2394400000, 00:18:15.059 "poll_groups": [ 00:18:15.059 { 00:18:15.059 "name": "nvmf_tgt_poll_group_000", 00:18:15.059 "admin_qpairs": 0, 00:18:15.059 "io_qpairs": 0, 00:18:15.059 "current_admin_qpairs": 0, 00:18:15.059 "current_io_qpairs": 0, 00:18:15.059 "pending_bdev_io": 0, 00:18:15.059 "completed_nvme_io": 0, 00:18:15.059 "transports": [] 00:18:15.059 }, 00:18:15.059 { 00:18:15.059 "name": "nvmf_tgt_poll_group_001", 00:18:15.059 "admin_qpairs": 0, 00:18:15.059 "io_qpairs": 0, 00:18:15.059 "current_admin_qpairs": 0, 00:18:15.059 "current_io_qpairs": 0, 00:18:15.059 "pending_bdev_io": 0, 00:18:15.059 "completed_nvme_io": 0, 00:18:15.059 "transports": [] 00:18:15.059 }, 00:18:15.059 { 00:18:15.059 "name": "nvmf_tgt_poll_group_002", 00:18:15.059 "admin_qpairs": 0, 00:18:15.059 "io_qpairs": 0, 00:18:15.059 "current_admin_qpairs": 0, 00:18:15.059 "current_io_qpairs": 0, 00:18:15.059 "pending_bdev_io": 0, 00:18:15.059 "completed_nvme_io": 0, 00:18:15.059 "transports": [] 00:18:15.059 }, 00:18:15.059 { 00:18:15.060 "name": "nvmf_tgt_poll_group_003", 00:18:15.060 "admin_qpairs": 0, 00:18:15.060 "io_qpairs": 0, 00:18:15.060 "current_admin_qpairs": 0, 00:18:15.060 "current_io_qpairs": 0, 00:18:15.060 "pending_bdev_io": 0, 00:18:15.060 "completed_nvme_io": 0, 00:18:15.060 "transports": [] 00:18:15.060 } 00:18:15.060 ] 00:18:15.060 }' 00:18:15.060 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:18:15.060 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:18:15.060 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:18:15.060 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:18:15.060 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:18:15.060 12:48:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:18:15.060 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:18:15.060 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:15.060 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.060 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.060 [2024-11-28 12:48:45.152183] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.060 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.060 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:18:15.060 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.060 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.060 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.060 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:18:15.060 "tick_rate": 2394400000, 00:18:15.060 "poll_groups": [ 00:18:15.060 { 00:18:15.060 "name": "nvmf_tgt_poll_group_000", 00:18:15.060 "admin_qpairs": 0, 00:18:15.060 "io_qpairs": 0, 00:18:15.060 "current_admin_qpairs": 0, 00:18:15.060 "current_io_qpairs": 0, 00:18:15.060 "pending_bdev_io": 0, 00:18:15.060 "completed_nvme_io": 0, 00:18:15.060 "transports": [ 00:18:15.060 { 00:18:15.060 "trtype": "TCP" 00:18:15.060 } 00:18:15.060 ] 00:18:15.060 }, 00:18:15.060 { 00:18:15.060 "name": "nvmf_tgt_poll_group_001", 00:18:15.060 "admin_qpairs": 0, 00:18:15.060 "io_qpairs": 0, 00:18:15.060 "current_admin_qpairs": 0, 00:18:15.060 "current_io_qpairs": 0, 00:18:15.060 "pending_bdev_io": 0, 00:18:15.060 
"completed_nvme_io": 0, 00:18:15.060 "transports": [ 00:18:15.060 { 00:18:15.060 "trtype": "TCP" 00:18:15.060 } 00:18:15.060 ] 00:18:15.060 }, 00:18:15.060 { 00:18:15.060 "name": "nvmf_tgt_poll_group_002", 00:18:15.060 "admin_qpairs": 0, 00:18:15.060 "io_qpairs": 0, 00:18:15.060 "current_admin_qpairs": 0, 00:18:15.060 "current_io_qpairs": 0, 00:18:15.060 "pending_bdev_io": 0, 00:18:15.060 "completed_nvme_io": 0, 00:18:15.060 "transports": [ 00:18:15.060 { 00:18:15.060 "trtype": "TCP" 00:18:15.060 } 00:18:15.060 ] 00:18:15.060 }, 00:18:15.060 { 00:18:15.060 "name": "nvmf_tgt_poll_group_003", 00:18:15.060 "admin_qpairs": 0, 00:18:15.060 "io_qpairs": 0, 00:18:15.060 "current_admin_qpairs": 0, 00:18:15.060 "current_io_qpairs": 0, 00:18:15.060 "pending_bdev_io": 0, 00:18:15.060 "completed_nvme_io": 0, 00:18:15.060 "transports": [ 00:18:15.060 { 00:18:15.060 "trtype": "TCP" 00:18:15.060 } 00:18:15.060 ] 00:18:15.060 } 00:18:15.060 ] 00:18:15.060 }' 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:15.329 
12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.329 Malloc1 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:18:15.329 12:48:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.329 [2024-11-28 12:48:45.362934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.329 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.330 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:18:15.330 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:18:15.330 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:18:15.330 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:18:15.330 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:18:15.330 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:18:15.330 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.330 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:18:15.330 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.330 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:18:15.330 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:18:15.330 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:18:15.330 [2024-11-28 12:48:45.399833] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:18:15.330 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:15.330 could not add new controller: failed to write to nvme-fabrics device 00:18:15.330 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:18:15.330 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:15.330 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:15.330 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:15.330 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.330 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.330 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.330 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.330 12:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:17.246 12:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:18:17.246 12:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:17.246 12:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:17.246 12:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:17.246 12:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:19.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:18:19.162 12:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.162 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:18:19.163 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.163 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:18:19.163 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:18:19.163 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:19.163 [2024-11-28 12:48:49.164906] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:18:19.163 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:19.163 could not add new controller: failed to write to nvme-fabrics device 00:18:19.163 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:18:19.163 
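The two rejected connects above show SPDK's per-subsystem host ACL: until the host NQN is admitted with `nvmf_subsystem_add_host` (or the ACL is disabled with `nvmf_subsystem_allow_any_host`), the write to `/dev/nvme-fabrics` fails with Input/output error. A dry-run sketch of that sequence, with method names and arguments taken from the log but the `rpc.py`/`nvme` invocation shape assumed (swap the `echo` prefixes for `scripts/rpc.py` and `nvme` to run against a live target):

```shell
#!/bin/sh
# Dry-run: RPC and NVME echo their command lines instead of executing them,
# so this sketch runs without an SPDK target. The command shapes mirror
# target/rpc.sh@55-@73 in the log.
RPC="echo rpc.py"
NVME="echo nvme"
NQN=nqn.2016-06.io.spdk:cnode1
HOST=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# In the log this connect is rejected: the subsystem does not allow $HOST.
$NVME connect --hostnqn="$HOST" -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
# Admit this one host NQN; the same connect then succeeds.
$RPC nvmf_subsystem_add_host "$NQN" "$HOST"
$NVME connect --hostnqn="$HOST" -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
$NVME disconnect -n "$NQN"
# Removing the host restores the rejection...
$RPC nvmf_subsystem_remove_host "$NQN" "$HOST"
# ...until the ACL is disabled entirely (-e as in target/rpc.sh@72).
$RPC nvmf_subsystem_allow_any_host -e "$NQN"
```

The test asserts both directions: the connect must fail while the host is unlisted (`NOT nvme connect` in the log) and must succeed once listed or once `allow_any_host` is enabled.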
12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:19.163 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:19.163 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:19.163 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:18:19.163 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.163 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:19.163 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.163 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:21.082 12:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:18:21.082 12:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:21.082 12:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.082 12:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:21.082 12:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:22.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:18:22.998 12:48:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.998 [2024-11-28 12:48:52.925300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.998 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:24.384 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:24.384 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:24.384 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:24.384 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:24.384 12:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:26.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:26.930 
12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.930 [2024-11-28 12:48:56.668329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.930 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:28.315 12:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:28.315 12:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:28.315 12:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:28.315 12:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:28.315 12:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:30.228 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:30.228 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:30.228 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:30.228 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:30.228 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:30.228 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:30.228 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:30.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:30.228 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:30.228 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:30.228 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:30.228 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:30.489 12:49:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:30.489 [2024-11-28 12:49:00.424005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.489 12:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:31.873 12:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:31.873 12:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:31.873 12:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:31.873 12:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:31.873 12:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:34.419 12:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:34.419 12:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:34.419 12:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:34.419 12:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:34.419 12:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:34.419 12:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:34.419 12:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:34.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:34.419 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:34.419 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:34.419 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:34.419 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:34.419 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:34.419 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:34.419 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:34.419 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:34.419 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
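The log then repeats one fixed sequence five times (`seq 1 5`, target/rpc.sh@81-@94): create the subsystem, add a TCP listener, attach namespace 5 backed by Malloc1, allow any host, connect and wait for the `SPDKISFASTANDAWESOME` serial to appear in `lsblk`, then disconnect and tear down. A dry-run sketch of one iteration, command shapes taken from the log, `echo` prefixes assumed so it runs without a target:

```shell
#!/bin/sh
# One iteration of the create/connect/teardown loop (target/rpc.sh@81-@94).
# Dry-run: replace the echo prefixes with scripts/rpc.py and nvme for real use.
RPC="echo rpc.py"
NVME="echo nvme"
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
$RPC nvmf_subsystem_allow_any_host "$NQN"
$NVME connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
# waitforserial in the log polls: lsblk -l -o NAME,SERIAL | grep -c <serial>
$NVME disconnect -n "$NQN"
$RPC nvmf_subsystem_remove_ns "$NQN" 5
$RPC nvmf_delete_subsystem "$NQN"
```

Looping this body is what produces the five near-identical blocks of listener notices, connects, and disconnect messages in the log.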
00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:34.420 [2024-11-28 12:49:04.136866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.420 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:35.806 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:35.806 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:35.806 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:18:35.806 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:35.806 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:37.720 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.720 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.982 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.982 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:37.982 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.982 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.982 [2024-11-28 12:49:07.856791] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.982 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.982 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:37.982 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.982 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.982 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.982 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:37.982 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.982 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.982 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.982 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:39.371 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:39.371 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:39.371 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:39.371 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:39.371 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:18:41.292 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:41.292 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:41.292 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:41.292 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:41.292 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:41.292 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:41.292 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:41.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.554 [2024-11-28 12:49:11.591175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.554 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.555 [2024-11-28 12:49:11.659180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.555 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.817 
12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:18:41.817 [2024-11-28 12:49:11.731216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.817 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.817 [2024-11-28 12:49:11.803278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.818 [2024-11-28 12:49:11.867331] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.818 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:18:41.818 "tick_rate": 2394400000, 00:18:41.818 "poll_groups": [ 00:18:41.818 { 00:18:41.818 "name": "nvmf_tgt_poll_group_000", 00:18:41.818 "admin_qpairs": 0, 00:18:41.818 "io_qpairs": 224, 00:18:41.818 "current_admin_qpairs": 0, 00:18:41.818 "current_io_qpairs": 0, 00:18:41.818 "pending_bdev_io": 0, 00:18:41.818 "completed_nvme_io": 274, 00:18:41.818 "transports": [ 00:18:41.818 { 00:18:41.818 "trtype": "TCP" 00:18:41.818 } 00:18:41.818 ] 00:18:41.818 }, 00:18:41.818 { 00:18:41.818 "name": "nvmf_tgt_poll_group_001", 00:18:41.818 "admin_qpairs": 1, 00:18:41.818 "io_qpairs": 223, 00:18:41.818 "current_admin_qpairs": 0, 00:18:41.818 "current_io_qpairs": 0, 00:18:41.818 "pending_bdev_io": 0, 00:18:41.818 "completed_nvme_io": 490, 00:18:41.818 "transports": [ 00:18:41.818 { 00:18:41.818 "trtype": "TCP" 00:18:41.818 } 00:18:41.818 ] 00:18:41.818 }, 00:18:41.818 { 00:18:41.818 "name": "nvmf_tgt_poll_group_002", 00:18:41.818 "admin_qpairs": 6, 00:18:41.818 "io_qpairs": 218, 00:18:41.818 "current_admin_qpairs": 0, 00:18:41.818 "current_io_qpairs": 0, 00:18:41.818 "pending_bdev_io": 0, 
00:18:41.818 "completed_nvme_io": 220, 00:18:41.818 "transports": [ 00:18:41.818 { 00:18:41.818 "trtype": "TCP" 00:18:41.818 } 00:18:41.818 ] 00:18:41.818 }, 00:18:41.818 { 00:18:41.818 "name": "nvmf_tgt_poll_group_003", 00:18:41.818 "admin_qpairs": 0, 00:18:41.818 "io_qpairs": 224, 00:18:41.818 "current_admin_qpairs": 0, 00:18:41.818 "current_io_qpairs": 0, 00:18:41.818 "pending_bdev_io": 0, 00:18:41.818 "completed_nvme_io": 255, 00:18:41.818 "transports": [ 00:18:41.818 { 00:18:41.818 "trtype": "TCP" 00:18:41.818 } 00:18:41.818 ] 00:18:41.818 } 00:18:41.818 ] 00:18:41.818 }' 00:18:42.079 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:18:42.079 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:42.079 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:42.079 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:42.079 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:18:42.079 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:18:42.079 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:42.079 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:42.079 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:42.079 rmmod nvme_tcp 00:18:42.079 rmmod nvme_fabrics 00:18:42.079 rmmod nvme_keyring 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3344020 ']' 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3344020 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3344020 ']' 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3344020 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3344020 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3344020' 00:18:42.079 killing process with pid 3344020 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3344020 00:18:42.079 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3344020 00:18:42.341 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:42.341 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:42.341 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:42.341 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:18:42.341 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:18:42.341 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:42.341 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:18:42.341 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:42.341 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:42.341 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.341 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:42.341 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.257 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:44.257 00:18:44.258 real 0m38.105s 00:18:44.258 user 1m53.533s 00:18:44.258 sys 0m8.002s 00:18:44.258 12:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:44.258 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:44.258 ************************************ 00:18:44.258 END TEST nvmf_rpc 00:18:44.258 ************************************ 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:44.519 ************************************ 00:18:44.519 START TEST nvmf_invalid 00:18:44.519 ************************************ 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:44.519 * Looking for test storage... 
00:18:44.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:44.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.519 --rc genhtml_branch_coverage=1 00:18:44.519 --rc 
genhtml_function_coverage=1 00:18:44.519 --rc genhtml_legend=1 00:18:44.519 --rc geninfo_all_blocks=1 00:18:44.519 --rc geninfo_unexecuted_blocks=1 00:18:44.519 00:18:44.519 ' 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:44.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.519 --rc genhtml_branch_coverage=1 00:18:44.519 --rc genhtml_function_coverage=1 00:18:44.519 --rc genhtml_legend=1 00:18:44.519 --rc geninfo_all_blocks=1 00:18:44.519 --rc geninfo_unexecuted_blocks=1 00:18:44.519 00:18:44.519 ' 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:44.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.519 --rc genhtml_branch_coverage=1 00:18:44.519 --rc genhtml_function_coverage=1 00:18:44.519 --rc genhtml_legend=1 00:18:44.519 --rc geninfo_all_blocks=1 00:18:44.519 --rc geninfo_unexecuted_blocks=1 00:18:44.519 00:18:44.519 ' 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:44.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.519 --rc genhtml_branch_coverage=1 00:18:44.519 --rc genhtml_function_coverage=1 00:18:44.519 --rc genhtml_legend=1 00:18:44.519 --rc geninfo_all_blocks=1 00:18:44.519 --rc geninfo_unexecuted_blocks=1 00:18:44.519 00:18:44.519 ' 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:44.519 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.781 12:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:44.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:44.781 12:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:18:44.781 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:53.013 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:53.013 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:18:53.013 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:53.013 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:53.013 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:53.013 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:53.013 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:53.013 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:18:53.013 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:53.013 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:18:53.013 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:18:53.013 12:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:18:53.013 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:18:53.013 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:18:53.013 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:18:53.013 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:53.013 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:53.013 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:53.014 12:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:53.014 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:53.014 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:53.014 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:53.014 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:53.014 12:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:53.014 12:49:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:53.014 12:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:53.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:53.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:18:53.014 00:18:53.014 --- 10.0.0.2 ping statistics --- 00:18:53.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.014 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:53.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:53.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:18:53.014 00:18:53.014 --- 10.0.0.1 ping statistics --- 00:18:53.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.014 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:53.014 12:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3353609 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3353609 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3353609 ']' 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.014 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.015 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:53.015 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.015 12:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:53.015 [2024-11-28 12:49:22.312414] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:18:53.015 [2024-11-28 12:49:22.312483] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.015 [2024-11-28 12:49:22.457955] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:53.015 [2024-11-28 12:49:22.517366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:53.015 [2024-11-28 12:49:22.545672] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.015 [2024-11-28 12:49:22.545714] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.015 [2024-11-28 12:49:22.545722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.015 [2024-11-28 12:49:22.545729] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.015 [2024-11-28 12:49:22.545735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:53.015 [2024-11-28 12:49:22.547669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.015 [2024-11-28 12:49:22.547858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.015 [2024-11-28 12:49:22.548015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.015 [2024-11-28 12:49:22.548015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:53.276 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.276 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:18:53.276 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:53.276 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.276 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:53.276 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.276 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:53.276 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode21190 00:18:53.276 [2024-11-28 12:49:23.357815] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:53.276 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:18:53.276 { 00:18:53.276 "nqn": "nqn.2016-06.io.spdk:cnode21190", 00:18:53.276 "tgt_name": "foobar", 00:18:53.276 "method": "nvmf_create_subsystem", 00:18:53.276 "req_id": 1 00:18:53.276 } 00:18:53.276 Got JSON-RPC error 
response 00:18:53.276 response: 00:18:53.276 { 00:18:53.276 "code": -32603, 00:18:53.276 "message": "Unable to find target foobar" 00:18:53.276 }' 00:18:53.276 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:18:53.276 { 00:18:53.276 "nqn": "nqn.2016-06.io.spdk:cnode21190", 00:18:53.276 "tgt_name": "foobar", 00:18:53.276 "method": "nvmf_create_subsystem", 00:18:53.276 "req_id": 1 00:18:53.276 } 00:18:53.276 Got JSON-RPC error response 00:18:53.276 response: 00:18:53.276 { 00:18:53.276 "code": -32603, 00:18:53.276 "message": "Unable to find target foobar" 00:18:53.276 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:53.276 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:53.537 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18872 00:18:53.537 [2024-11-28 12:49:23.570176] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18872: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:53.537 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:18:53.537 { 00:18:53.537 "nqn": "nqn.2016-06.io.spdk:cnode18872", 00:18:53.537 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:53.537 "method": "nvmf_create_subsystem", 00:18:53.537 "req_id": 1 00:18:53.537 } 00:18:53.537 Got JSON-RPC error response 00:18:53.537 response: 00:18:53.537 { 00:18:53.537 "code": -32602, 00:18:53.537 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:53.537 }' 00:18:53.537 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:18:53.537 { 00:18:53.537 "nqn": "nqn.2016-06.io.spdk:cnode18872", 00:18:53.537 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:53.537 "method": "nvmf_create_subsystem", 
00:18:53.537 "req_id": 1 00:18:53.537 } 00:18:53.537 Got JSON-RPC error response 00:18:53.537 response: 00:18:53.537 { 00:18:53.537 "code": -32602, 00:18:53.537 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:53.537 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:53.537 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:53.537 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18322 00:18:53.797 [2024-11-28 12:49:23.778492] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18322: invalid model number 'SPDK_Controller' 00:18:53.797 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:18:53.797 { 00:18:53.797 "nqn": "nqn.2016-06.io.spdk:cnode18322", 00:18:53.797 "model_number": "SPDK_Controller\u001f", 00:18:53.797 "method": "nvmf_create_subsystem", 00:18:53.797 "req_id": 1 00:18:53.797 } 00:18:53.797 Got JSON-RPC error response 00:18:53.797 response: 00:18:53.797 { 00:18:53.797 "code": -32602, 00:18:53.797 "message": "Invalid MN SPDK_Controller\u001f" 00:18:53.797 }' 00:18:53.797 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:18:53.797 { 00:18:53.797 "nqn": "nqn.2016-06.io.spdk:cnode18322", 00:18:53.797 "model_number": "SPDK_Controller\u001f", 00:18:53.797 "method": "nvmf_create_subsystem", 00:18:53.797 "req_id": 1 00:18:53.797 } 00:18:53.797 Got JSON-RPC error response 00:18:53.797 response: 00:18:53.797 { 00:18:53.797 "code": -32602, 00:18:53.798 "message": "Invalid MN SPDK_Controller\u001f" 00:18:53.798 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:53.798 12:49:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 
00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:18:53.798 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:18:54.059 
12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.059 12:49:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.059 12:49:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ P == \- ]] 00:18:54.059 12:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'P <#!"mn(/A>/uxQgkR"t' 00:18:54.059 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'P <#!"mn(/A>/uxQgkR"t' nqn.2016-06.io.spdk:cnode17798 00:18:54.059 [2024-11-28 12:49:24.167148] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17798: invalid serial number 'P <#!"mn(/A>/uxQgkR"t' 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:18:54.321 { 00:18:54.321 "nqn": "nqn.2016-06.io.spdk:cnode17798", 00:18:54.321 "serial_number": "P <#!\"mn(/A>/uxQgkR\"t", 00:18:54.321 "method": "nvmf_create_subsystem", 00:18:54.321 "req_id": 1 00:18:54.321 } 00:18:54.321 Got JSON-RPC error response 00:18:54.321 response: 00:18:54.321 { 00:18:54.321 "code": -32602, 00:18:54.321 "message": "Invalid SN P <#!\"mn(/A>/uxQgkR\"t" 00:18:54.321 }' 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:18:54.321 { 00:18:54.321 "nqn": "nqn.2016-06.io.spdk:cnode17798", 00:18:54.321 "serial_number": "P <#!\"mn(/A>/uxQgkR\"t", 00:18:54.321 
"method": "nvmf_create_subsystem", 00:18:54.321 "req_id": 1 00:18:54.321 } 00:18:54.321 Got JSON-RPC error response 00:18:54.321 response: 00:18:54.321 { 00:18:54.321 "code": -32602, 00:18:54.321 "message": "Invalid SN P <#!\"mn(/A>/uxQgkR\"t" 00:18:54.321 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.321 12:49:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.321 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:18:54.322 12:49:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:18:54.322 12:49:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:18:54.322 12:49:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.322 12:49:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.322 12:49:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:18:54.322 12:49:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:18:54.322 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.323 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.323 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:18:54.323 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:18:54.323 12:49:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:18:54.323 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.323 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.583 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:18:54.583 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:18:54.583 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:18:54.583 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.583 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.583 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:18:54.583 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:18:54.583 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:18:54.583 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.583 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.583 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:18:54.583 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:18:54.583 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:18:54.583 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.583 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.583 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:18:54.583 12:49:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:18:54.583 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:18:54.583 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.583 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.583 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.584 12:49:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ U == \- ]] 00:18:54.584 12:49:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'U%4MLa&}Hf=iJiop(~K:Twyes|cn]l /dev/null' 00:18:56.683 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.596 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:58.596 00:18:58.596 real 0m14.251s 00:18:58.596 user 0m20.805s 00:18:58.596 sys 0m6.897s 00:18:58.596 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.596 12:49:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:58.596 ************************************ 00:18:58.596 END TEST nvmf_invalid 00:18:58.596 ************************************ 00:18:58.857 12:49:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:58.857 12:49:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:58.857 12:49:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.857 12:49:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:58.857 ************************************ 00:18:58.857 START TEST nvmf_connect_stress 00:18:58.857 ************************************ 00:18:58.857 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:58.857 * Looking for test storage... 
00:18:58.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:58.857 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:58.857 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:18:58.857 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:58.857 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:58.857 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:58.857 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:18:58.858 12:49:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:58.858 12:49:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:58.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.858 --rc genhtml_branch_coverage=1 00:18:58.858 --rc genhtml_function_coverage=1 00:18:58.858 --rc genhtml_legend=1 00:18:58.858 --rc geninfo_all_blocks=1 00:18:58.858 --rc geninfo_unexecuted_blocks=1 00:18:58.858 00:18:58.858 ' 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:58.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.858 --rc genhtml_branch_coverage=1 00:18:58.858 --rc genhtml_function_coverage=1 00:18:58.858 --rc genhtml_legend=1 00:18:58.858 --rc geninfo_all_blocks=1 00:18:58.858 --rc geninfo_unexecuted_blocks=1 00:18:58.858 00:18:58.858 ' 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:58.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.858 --rc genhtml_branch_coverage=1 00:18:58.858 --rc genhtml_function_coverage=1 00:18:58.858 --rc genhtml_legend=1 00:18:58.858 --rc geninfo_all_blocks=1 00:18:58.858 --rc geninfo_unexecuted_blocks=1 00:18:58.858 00:18:58.858 ' 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:58.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.858 --rc genhtml_branch_coverage=1 00:18:58.858 --rc genhtml_function_coverage=1 00:18:58.858 --rc genhtml_legend=1 00:18:58.858 --rc geninfo_all_blocks=1 00:18:58.858 --rc geninfo_unexecuted_blocks=1 00:18:58.858 00:18:58.858 ' 00:18:58.858 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.119 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:18:59.119 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.119 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.119 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.119 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.119 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.119 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.119 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.119 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.119 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.119 12:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.119 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.119 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.119 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.119 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.119 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.119 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.119 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:59.119 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:18:59.119 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.119 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.119 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.119 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:59.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:18:59.120 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:07.264 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:07.264 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:19:07.264 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:07.264 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:07.264 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:07.264 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:07.264 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:07.264 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:19:07.264 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:07.264 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:19:07.264 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:19:07.264 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:19:07.264 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:19:07.264 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:19:07.264 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:19:07.264 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.264 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.265 12:49:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:07.265 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:07.265 12:49:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:07.265 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.265 12:49:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:07.265 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:07.265 Found net devices under 0000:4b:00.1: cvl_0_1 
00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:07.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:07.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:19:07.265 00:19:07.265 --- 10.0.0.2 ping statistics --- 00:19:07.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.265 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:07.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:07.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:19:07.265 00:19:07.265 --- 10.0.0.1 ping statistics --- 00:19:07.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.265 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:07.265 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:07.266 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.266 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:07.266 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:07.266 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:19:07.266 12:49:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:07.266 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:07.266 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:07.266 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3358832 00:19:07.266 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3358832 00:19:07.266 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:07.266 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3358832 ']' 00:19:07.266 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.266 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.266 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.266 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.266 12:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:07.266 [2024-11-28 12:49:36.632695] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:19:07.266 [2024-11-28 12:49:36.632760] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.266 [2024-11-28 12:49:36.777368] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:07.266 [2024-11-28 12:49:36.834561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:07.266 [2024-11-28 12:49:36.861731] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.266 [2024-11-28 12:49:36.861777] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.266 [2024-11-28 12:49:36.861786] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.266 [2024-11-28 12:49:36.861793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.266 [2024-11-28 12:49:36.861800] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:07.266 [2024-11-28 12:49:36.863594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.266 [2024-11-28 12:49:36.863757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.266 [2024-11-28 12:49:36.863758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:07.527 [2024-11-28 12:49:37.514591] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:07.527 [2024-11-28 12:49:37.540288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:07.527 NULL1 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3359142 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:07.527 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:07.528 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:07.789 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:07.789 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:07.789 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:07.789 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:07.789 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:07.789 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:07.789 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:07.789 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:07.789 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:07.789 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:07.789 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.789 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:08.051 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.051 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:08.051 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:08.051 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.051 12:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:08.312 12:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.312 12:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:08.312 12:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:08.312 12:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.312 12:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:08.573 12:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.573 12:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:08.573 12:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:08.573 12:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.573 12:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:09.146 12:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.146 12:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:09.146 12:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:09.146 12:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.146 12:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:09.407 12:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.407 12:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:09.407 12:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:09.407 12:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.407 12:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:09.667 12:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.667 12:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:09.667 12:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:09.667 12:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.667 12:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:09.928 12:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.928 12:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:09.928 12:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:09.928 12:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.928 12:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:10.189 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.189 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:10.189 12:49:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:10.189 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.189 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:10.760 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.760 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:10.760 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:10.760 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.760 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:11.021 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.021 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:11.021 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:11.021 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.021 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:11.281 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.281 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:11.281 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:11.281 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.281 
12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:11.542 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.542 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:11.542 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:11.542 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.542 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:11.804 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.804 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:11.804 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:11.804 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.804 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:12.376 12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.376 12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:12.376 12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:12.376 12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.376 12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:12.638 12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.638 
12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:12.638 12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:12.638 12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.638 12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:12.898 12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.898 12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:12.898 12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:12.898 12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.898 12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:13.159 12:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.159 12:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:13.159 12:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:13.159 12:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.159 12:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:13.420 12:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.420 12:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:13.420 12:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:19:13.420 12:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.420 12:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:13.993 12:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.993 12:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:13.993 12:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:13.993 12:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.993 12:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:14.254 12:49:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.254 12:49:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:14.254 12:49:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:14.254 12:49:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.254 12:49:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:14.515 12:49:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.515 12:49:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:14.515 12:49:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:14.515 12:49:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.515 12:49:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:19:14.775 12:49:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.775 12:49:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:14.775 12:49:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:14.775 12:49:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.776 12:49:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:15.036 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.036 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:15.036 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:15.036 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.036 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:15.606 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.606 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:15.606 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:15.606 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.606 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:15.867 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.867 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 3359142 00:19:15.867 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:15.867 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.867 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:16.128 12:49:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.128 12:49:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:16.128 12:49:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:16.128 12:49:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.128 12:49:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:16.388 12:49:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.388 12:49:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:16.388 12:49:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:16.388 12:49:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.388 12:49:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:16.649 12:49:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.649 12:49:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:16.649 12:49:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:16.649 12:49:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:16.649 12:49:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:17.220 12:49:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.220 12:49:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:17.220 12:49:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:17.220 12:49:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.220 12:49:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:17.482 12:49:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.482 12:49:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:17.482 12:49:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:17.482 12:49:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.482 12:49:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:17.743 12:49:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.743 12:49:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:17.743 12:49:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:17.743 12:49:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.743 12:49:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:17.743 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:19:18.005 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.005 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3359142 00:19:18.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3359142) - No such process 00:19:18.005 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3359142 00:19:18.005 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:18.005 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:18.005 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:19:18.005 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:18.005 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:19:18.005 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:18.005 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:19:18.005 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:18.005 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:18.005 rmmod nvme_tcp 00:19:18.005 rmmod nvme_fabrics 00:19:18.005 rmmod nvme_keyring 00:19:18.005 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3358832 ']' 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3358832 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3358832 ']' 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3358832 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3358832 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3358832' 00:19:18.265 killing process with pid 3358832 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3358832 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3358832 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:18.265 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.812 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:20.812 00:19:20.812 real 0m21.606s 00:19:20.812 user 0m43.002s 00:19:20.812 sys 0m9.264s 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:20.813 ************************************ 00:19:20.813 END TEST nvmf_connect_stress 00:19:20.813 ************************************ 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:20.813 ************************************ 00:19:20.813 START TEST nvmf_fused_ordering 00:19:20.813 ************************************ 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:19:20.813 * Looking for test storage... 00:19:20.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:19:20.813 12:49:50 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:20.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.813 --rc genhtml_branch_coverage=1 00:19:20.813 --rc genhtml_function_coverage=1 00:19:20.813 --rc genhtml_legend=1 00:19:20.813 --rc geninfo_all_blocks=1 00:19:20.813 --rc geninfo_unexecuted_blocks=1 00:19:20.813 00:19:20.813 ' 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:20.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.813 --rc genhtml_branch_coverage=1 00:19:20.813 --rc genhtml_function_coverage=1 00:19:20.813 --rc genhtml_legend=1 00:19:20.813 --rc geninfo_all_blocks=1 00:19:20.813 --rc geninfo_unexecuted_blocks=1 00:19:20.813 00:19:20.813 ' 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:20.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.813 --rc genhtml_branch_coverage=1 00:19:20.813 --rc genhtml_function_coverage=1 00:19:20.813 --rc genhtml_legend=1 00:19:20.813 --rc geninfo_all_blocks=1 00:19:20.813 --rc geninfo_unexecuted_blocks=1 00:19:20.813 00:19:20.813 ' 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:20.813 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:19:20.813 --rc genhtml_branch_coverage=1 00:19:20.813 --rc genhtml_function_coverage=1 00:19:20.813 --rc genhtml_legend=1 00:19:20.813 --rc geninfo_all_blocks=1 00:19:20.813 --rc geninfo_unexecuted_blocks=1 00:19:20.813 00:19:20.813 ' 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.813 12:49:50 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.813 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:20.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:19:20.814 12:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.960 12:49:57 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:28.960 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:28.960 12:49:57 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:28.960 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.960 12:49:57 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:28.960 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:28.960 Found net devices under 0000:4b:00.1: cvl_0_1 
00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:28.960 12:49:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:28.960 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:28.960 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:28.960 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:28.960 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:28.960 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:28.960 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:28.960 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:28.960 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:28.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:28.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:19:28.961 00:19:28.961 --- 10.0.0.2 ping statistics --- 00:19:28.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.961 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:19:28.961 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:28.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:28.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:19:28.961 00:19:28.961 --- 10.0.0.1 ping statistics --- 00:19:28.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.961 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:19:28.961 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:28.961 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:19:28.961 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:28.961 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:28.961 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:28.961 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:28.961 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:28.961 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:28.961 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:28.961 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:19:28.961 12:49:58 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:28.961 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:28.961 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:28.961 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3365365 00:19:28.961 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3365365 00:19:28.961 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:28.961 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3365365 ']' 00:19:28.961 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.961 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.961 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.961 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.961 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:28.961 [2024-11-28 12:49:58.274421] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:19:28.961 [2024-11-28 12:49:58.274496] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.961 [2024-11-28 12:49:58.420046] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:28.961 [2024-11-28 12:49:58.479612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.961 [2024-11-28 12:49:58.505803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.961 [2024-11-28 12:49:58.505844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.961 [2024-11-28 12:49:58.505853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.961 [2024-11-28 12:49:58.505866] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.961 [2024-11-28 12:49:58.505873] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:28.961 [2024-11-28 12:49:58.506610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.961 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.961 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:19:28.961 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:28.961 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:28.961 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:29.223 [2024-11-28 12:49:59.128437] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:29.223 [2024-11-28 12:49:59.152637] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:29.223 NULL1 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.223 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:29.223 [2024-11-28 12:49:59.222064] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:19:29.223 [2024-11-28 12:49:59.222108] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3365529 ] 00:19:29.484 [2024-11-28 12:49:59.356823] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:29.746 Attached to nqn.2016-06.io.spdk:cnode1 00:19:29.746 Namespace ID: 1 size: 1GB 00:19:29.746 fused_ordering(0) 00:19:29.746 fused_ordering(1) 00:19:29.746 fused_ordering(2) 00:19:29.746 fused_ordering(3) ... [fused_ordering(4) through fused_ordering(845) elided; counter increments monotonically, timestamps 00:19:29.746 through 00:19:31.754] ... 00:19:31.754 fused_ordering(846) 
00:19:31.754 fused_ordering(847) 00:19:31.754 fused_ordering(848) 00:19:31.754 fused_ordering(849) 00:19:31.754 fused_ordering(850) 00:19:31.754 fused_ordering(851) 00:19:31.754 fused_ordering(852) 00:19:31.754 fused_ordering(853) 00:19:31.754 fused_ordering(854) 00:19:31.754 fused_ordering(855) 00:19:31.754 fused_ordering(856) 00:19:31.754 fused_ordering(857) 00:19:31.754 fused_ordering(858) 00:19:31.754 fused_ordering(859) 00:19:31.754 fused_ordering(860) 00:19:31.754 fused_ordering(861) 00:19:31.754 fused_ordering(862) 00:19:31.754 fused_ordering(863) 00:19:31.754 fused_ordering(864) 00:19:31.754 fused_ordering(865) 00:19:31.754 fused_ordering(866) 00:19:31.754 fused_ordering(867) 00:19:31.754 fused_ordering(868) 00:19:31.754 fused_ordering(869) 00:19:31.754 fused_ordering(870) 00:19:31.754 fused_ordering(871) 00:19:31.754 fused_ordering(872) 00:19:31.754 fused_ordering(873) 00:19:31.754 fused_ordering(874) 00:19:31.754 fused_ordering(875) 00:19:31.754 fused_ordering(876) 00:19:31.754 fused_ordering(877) 00:19:31.754 fused_ordering(878) 00:19:31.754 fused_ordering(879) 00:19:31.754 fused_ordering(880) 00:19:31.754 fused_ordering(881) 00:19:31.754 fused_ordering(882) 00:19:31.754 fused_ordering(883) 00:19:31.754 fused_ordering(884) 00:19:31.754 fused_ordering(885) 00:19:31.754 fused_ordering(886) 00:19:31.754 fused_ordering(887) 00:19:31.754 fused_ordering(888) 00:19:31.754 fused_ordering(889) 00:19:31.754 fused_ordering(890) 00:19:31.754 fused_ordering(891) 00:19:31.754 fused_ordering(892) 00:19:31.754 fused_ordering(893) 00:19:31.754 fused_ordering(894) 00:19:31.754 fused_ordering(895) 00:19:31.754 fused_ordering(896) 00:19:31.754 fused_ordering(897) 00:19:31.754 fused_ordering(898) 00:19:31.754 fused_ordering(899) 00:19:31.754 fused_ordering(900) 00:19:31.754 fused_ordering(901) 00:19:31.754 fused_ordering(902) 00:19:31.754 fused_ordering(903) 00:19:31.754 fused_ordering(904) 00:19:31.754 fused_ordering(905) 00:19:31.754 fused_ordering(906) 00:19:31.754 
fused_ordering(907) 00:19:31.754 fused_ordering(908) 00:19:31.754 fused_ordering(909) 00:19:31.754 fused_ordering(910) 00:19:31.754 fused_ordering(911) 00:19:31.754 fused_ordering(912) 00:19:31.754 fused_ordering(913) 00:19:31.754 fused_ordering(914) 00:19:31.754 fused_ordering(915) 00:19:31.754 fused_ordering(916) 00:19:31.754 fused_ordering(917) 00:19:31.754 fused_ordering(918) 00:19:31.754 fused_ordering(919) 00:19:31.754 fused_ordering(920) 00:19:31.754 fused_ordering(921) 00:19:31.754 fused_ordering(922) 00:19:31.754 fused_ordering(923) 00:19:31.754 fused_ordering(924) 00:19:31.754 fused_ordering(925) 00:19:31.754 fused_ordering(926) 00:19:31.754 fused_ordering(927) 00:19:31.754 fused_ordering(928) 00:19:31.754 fused_ordering(929) 00:19:31.754 fused_ordering(930) 00:19:31.754 fused_ordering(931) 00:19:31.754 fused_ordering(932) 00:19:31.754 fused_ordering(933) 00:19:31.754 fused_ordering(934) 00:19:31.754 fused_ordering(935) 00:19:31.754 fused_ordering(936) 00:19:31.754 fused_ordering(937) 00:19:31.754 fused_ordering(938) 00:19:31.754 fused_ordering(939) 00:19:31.754 fused_ordering(940) 00:19:31.754 fused_ordering(941) 00:19:31.754 fused_ordering(942) 00:19:31.754 fused_ordering(943) 00:19:31.754 fused_ordering(944) 00:19:31.754 fused_ordering(945) 00:19:31.754 fused_ordering(946) 00:19:31.754 fused_ordering(947) 00:19:31.754 fused_ordering(948) 00:19:31.754 fused_ordering(949) 00:19:31.754 fused_ordering(950) 00:19:31.754 fused_ordering(951) 00:19:31.754 fused_ordering(952) 00:19:31.754 fused_ordering(953) 00:19:31.754 fused_ordering(954) 00:19:31.754 fused_ordering(955) 00:19:31.754 fused_ordering(956) 00:19:31.754 fused_ordering(957) 00:19:31.754 fused_ordering(958) 00:19:31.754 fused_ordering(959) 00:19:31.754 fused_ordering(960) 00:19:31.754 fused_ordering(961) 00:19:31.754 fused_ordering(962) 00:19:31.754 fused_ordering(963) 00:19:31.754 fused_ordering(964) 00:19:31.754 fused_ordering(965) 00:19:31.754 fused_ordering(966) 00:19:31.754 fused_ordering(967) 
00:19:31.754 fused_ordering(968) 00:19:31.754 fused_ordering(969) 00:19:31.754 fused_ordering(970) 00:19:31.754 fused_ordering(971) 00:19:31.754 fused_ordering(972) 00:19:31.754 fused_ordering(973) 00:19:31.754 fused_ordering(974) 00:19:31.754 fused_ordering(975) 00:19:31.754 fused_ordering(976) 00:19:31.754 fused_ordering(977) 00:19:31.754 fused_ordering(978) 00:19:31.754 fused_ordering(979) 00:19:31.754 fused_ordering(980) 00:19:31.754 fused_ordering(981) 00:19:31.754 fused_ordering(982) 00:19:31.754 fused_ordering(983) 00:19:31.754 fused_ordering(984) 00:19:31.754 fused_ordering(985) 00:19:31.754 fused_ordering(986) 00:19:31.754 fused_ordering(987) 00:19:31.754 fused_ordering(988) 00:19:31.754 fused_ordering(989) 00:19:31.754 fused_ordering(990) 00:19:31.754 fused_ordering(991) 00:19:31.754 fused_ordering(992) 00:19:31.754 fused_ordering(993) 00:19:31.754 fused_ordering(994) 00:19:31.754 fused_ordering(995) 00:19:31.754 fused_ordering(996) 00:19:31.754 fused_ordering(997) 00:19:31.754 fused_ordering(998) 00:19:31.754 fused_ordering(999) 00:19:31.754 fused_ordering(1000) 00:19:31.754 fused_ordering(1001) 00:19:31.754 fused_ordering(1002) 00:19:31.754 fused_ordering(1003) 00:19:31.754 fused_ordering(1004) 00:19:31.754 fused_ordering(1005) 00:19:31.754 fused_ordering(1006) 00:19:31.754 fused_ordering(1007) 00:19:31.754 fused_ordering(1008) 00:19:31.754 fused_ordering(1009) 00:19:31.754 fused_ordering(1010) 00:19:31.754 fused_ordering(1011) 00:19:31.754 fused_ordering(1012) 00:19:31.754 fused_ordering(1013) 00:19:31.754 fused_ordering(1014) 00:19:31.754 fused_ordering(1015) 00:19:31.754 fused_ordering(1016) 00:19:31.754 fused_ordering(1017) 00:19:31.754 fused_ordering(1018) 00:19:31.754 fused_ordering(1019) 00:19:31.754 fused_ordering(1020) 00:19:31.754 fused_ordering(1021) 00:19:31.754 fused_ordering(1022) 00:19:31.754 fused_ordering(1023) 00:19:31.754 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM 
EXIT 00:19:31.754 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:19:31.754 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:31.754 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:19:31.755 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:31.755 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:19:31.755 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:31.755 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:31.755 rmmod nvme_tcp 00:19:31.755 rmmod nvme_fabrics 00:19:31.755 rmmod nvme_keyring 00:19:31.755 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:31.755 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:19:31.755 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:19:31.755 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3365365 ']' 00:19:31.755 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3365365 00:19:31.755 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3365365 ']' 00:19:31.755 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3365365 00:19:31.755 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:19:31.755 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.755 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3365365 00:19:32.016 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:32.016 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:32.016 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3365365' 00:19:32.016 killing process with pid 3365365 00:19:32.016 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3365365 00:19:32.016 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3365365 00:19:32.016 12:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:32.016 12:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:32.016 12:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:32.016 12:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:19:32.016 12:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:19:32.016 12:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:32.016 12:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:19:32.016 12:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:32.016 12:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:32.016 12:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.016 12:50:02 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.016 12:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:34.583 00:19:34.583 real 0m13.659s 00:19:34.583 user 0m7.181s 00:19:34.583 sys 0m7.262s 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:34.583 ************************************ 00:19:34.583 END TEST nvmf_fused_ordering 00:19:34.583 ************************************ 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:34.583 ************************************ 00:19:34.583 START TEST nvmf_ns_masking 00:19:34.583 ************************************ 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:19:34.583 * Looking for test storage... 
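The `nvmftestfini` teardown traced earlier (trap release, `modprobe -v -r nvme-tcp`, then `killprocess` on the nvmf target pid, printing "killing process with pid 3365365") relies on a kill-and-wait helper. A minimal standalone sketch of that pattern; the function and variable names here are illustrative, not the exact `autotest_common.sh` implementation:

```shell
# Sketch of the kill-and-wait teardown pattern seen in the killprocess
# trace above. Names are illustrative, not the SPDK helper itself.
killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1               # refuse an empty pid argument
    kill -0 "$pid" 2>/dev/null || return 0  # process already gone
    if [ "$(uname)" = Linux ]; then
        # mirror the log's "killing process with pid ..." message
        echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
    fi
    kill "$pid" 2>/dev/null
    wait "$pid" 2>/dev/null || true         # reap it, ignoring the TERM status
}

# demo: start a throwaway process and tear it down
sleep 30 &
demo_pid=$!
killprocess_sketch "$demo_pid"
```

The `kill -0` probe tests existence without signalling, and the final `wait` guarantees the child is reaped before the script continues, which is why the log's cleanup can safely proceed to module unload afterwards.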
00:19:34.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:19:34.583 
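The `lt 1.15 2` / `cmp_versions` trace here checks whether the installed lcov predates version 2 by splitting each version string on separators and comparing components numerically, left to right. A simplified sketch of that comparison, not the exact `scripts/common.sh` helper:

```shell
# Simplified dotted-version comparison, as traced by lt/cmp_versions above.
# Missing components default to 0; equal versions are not "less than".
version_lt() {
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1
}

version_lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

Comparing component-wise rather than lexically is what makes `1.9 < 1.15` come out correctly, since 9 < 15 numerically even though "9" > "1" as strings.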
12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:34.583 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:19:34.583 --rc genhtml_branch_coverage=1 00:19:34.583 --rc genhtml_function_coverage=1 00:19:34.583 --rc genhtml_legend=1 00:19:34.583 --rc geninfo_all_blocks=1 00:19:34.583 --rc geninfo_unexecuted_blocks=1 00:19:34.583 00:19:34.583 ' 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:34.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.583 --rc genhtml_branch_coverage=1 00:19:34.583 --rc genhtml_function_coverage=1 00:19:34.583 --rc genhtml_legend=1 00:19:34.583 --rc geninfo_all_blocks=1 00:19:34.583 --rc geninfo_unexecuted_blocks=1 00:19:34.583 00:19:34.583 ' 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:34.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.583 --rc genhtml_branch_coverage=1 00:19:34.583 --rc genhtml_function_coverage=1 00:19:34.583 --rc genhtml_legend=1 00:19:34.583 --rc geninfo_all_blocks=1 00:19:34.583 --rc geninfo_unexecuted_blocks=1 00:19:34.583 00:19:34.583 ' 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:34.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.583 --rc genhtml_branch_coverage=1 00:19:34.583 --rc genhtml_function_coverage=1 00:19:34.583 --rc genhtml_legend=1 00:19:34.583 --rc geninfo_all_blocks=1 00:19:34.583 --rc geninfo_unexecuted_blocks=1 00:19:34.583 00:19:34.583 ' 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.583 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:34.584 12:50:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.584 12:50:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:34.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=20094e36-f8e9-423b-bade-04fa3b7bd916 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=477660d8-229b-4b8b-8140-2135db6aa674 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:19:34.584 12:50:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=f4b5547a-a0db-497c-b2d0-c73fff818501 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:19:34.584 12:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:42.727 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:42.727 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:19:42.727 12:50:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:42.727 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:42.727 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:42.727 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:42.727 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:42.727 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:19:42.727 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:42.727 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:19:42.727 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:19:42.727 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:19:42.727 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:19:42.727 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:19:42.727 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:19:42.727 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:42.727 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:42.727 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:42.727 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:42.728 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:42.728 12:50:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:42.728 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:42.728 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:42.728 Found net devices under 0000:4b:00.1: 
cvl_0_1 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:42.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:42.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:19:42.728 00:19:42.728 --- 10.0.0.2 ping statistics --- 00:19:42.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.728 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:42.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:42.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:19:42.728 00:19:42.728 --- 10.0.0.1 ping statistics --- 00:19:42.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.728 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:42.728 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:42.728 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:19:42.728 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:19:42.728 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:42.728 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:42.728 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3370209 00:19:42.728 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3370209 00:19:42.728 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:42.728 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3370209 ']' 00:19:42.728 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.728 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.728 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.728 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.728 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:42.728 [2024-11-28 12:50:12.099860] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:19:42.728 [2024-11-28 12:50:12.099932] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.728 [2024-11-28 12:50:12.245154] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:42.728 [2024-11-28 12:50:12.305203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.728 [2024-11-28 12:50:12.331427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.728 [2024-11-28 12:50:12.331473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.728 [2024-11-28 12:50:12.331482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.728 [2024-11-28 12:50:12.331489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.728 [2024-11-28 12:50:12.331495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:42.728 [2024-11-28 12:50:12.332287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.998 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.998 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:19:42.998 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:42.998 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:42.998 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:42.998 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.998 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:42.998 [2024-11-28 12:50:13.121393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.266 12:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:19:43.266 12:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:19:43.266 12:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:43.266 Malloc1 00:19:43.266 12:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:43.529 Malloc2 00:19:43.529 12:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:43.789 12:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:19:44.051 12:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:44.051 [2024-11-28 12:50:14.155031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.312 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:19:44.312 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f4b5547a-a0db-497c-b2d0-c73fff818501 -a 10.0.0.2 -s 4420 -i 4 00:19:44.312 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:19:44.312 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:44.312 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:44.312 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:44.312 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:46.861 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:46.861 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:46.861 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # 
grep -c SPDKISFASTANDAWESOME 00:19:46.861 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:46.861 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:46.861 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:46.861 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:46.862 [ 0]:0x1 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cf9341ed2e5c4ca69cc77fbbcf75d407 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cf9341ed2e5c4ca69cc77fbbcf75d407 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:46.862 [ 0]:0x1 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cf9341ed2e5c4ca69cc77fbbcf75d407 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cf9341ed2e5c4ca69cc77fbbcf75d407 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:46.862 [ 1]:0x2 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e285d349d1542ae8d7b59c0662e180a 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e285d349d1542ae8d7b59c0662e180a != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:46.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:46.862 12:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:47.123 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:19:47.384 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:19:47.384 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f4b5547a-a0db-497c-b2d0-c73fff818501 -a 10.0.0.2 -s 4420 -i 4 00:19:47.384 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:19:47.384 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:47.384 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:47.384 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:19:47.384 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:19:47.384 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:49.331 12:50:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:49.331 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:49.331 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # 
type -t ns_is_visible 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:49.614 [ 0]:0x2 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e285d349d1542ae8d7b59c0662e180a 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e285d349d1542ae8d7b59c0662e180a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:49.614 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:49.897 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:19:49.897 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:49.897 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:49.897 [ 0]:0x1 00:19:49.897 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:49.897 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:49.897 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cf9341ed2e5c4ca69cc77fbbcf75d407 00:19:49.897 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cf9341ed2e5c4ca69cc77fbbcf75d407 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:49.897 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:19:49.897 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:49.897 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:19:49.897 [ 1]:0x2 00:19:49.897 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:49.897 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:49.897 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e285d349d1542ae8d7b59c0662e180a 00:19:49.897 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e285d349d1542ae8d7b59c0662e180a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:49.897 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:50.158 [ 0]:0x2 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e285d349d1542ae8d7b59c0662e180a 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- 
# [[ 0e285d349d1542ae8d7b59c0662e180a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:19:50.158 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:50.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:50.423 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:50.423 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:19:50.423 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f4b5547a-a0db-497c-b2d0-c73fff818501 -a 10.0.0.2 -s 4420 -i 4 00:19:50.684 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:50.684 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:50.684 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:50.684 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:50.684 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:50.684 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:52.597 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:52.597 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:19:52.597 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:52.597 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:52.598 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:52.598 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:52.598 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:52.598 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:52.858 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:52.858 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:52.859 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:19:52.859 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:52.859 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:52.859 [ 0]:0x1 00:19:52.859 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:52.859 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:52.859 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cf9341ed2e5c4ca69cc77fbbcf75d407 00:19:52.859 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cf9341ed2e5c4ca69cc77fbbcf75d407 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:52.859 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:19:52.859 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:52.859 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:53.119 [ 1]:0x2 00:19:53.119 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:53.119 12:50:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:53.119 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e285d349d1542ae8d7b59c0662e180a 00:19:53.119 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e285d349d1542ae8d7b59c0662e180a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:53.119 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:53.119 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:19:53.119 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:53.120 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:53.120 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:53.120 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.120 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:19:53.120 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.120 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:53.120 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:53.120 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:53.120 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:53.120 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:53.382 [ 0]:0x2 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e285d349d1542ae8d7b59c0662e180a 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e285d349d1542ae8d7b59c0662e180a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # 
case "$(type -t "$arg")" in 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:53.382 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:53.382 [2024-11-28 12:50:23.502651] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:19:53.644 request: 00:19:53.644 { 00:19:53.644 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.644 "nsid": 2, 00:19:53.644 "host": "nqn.2016-06.io.spdk:host1", 00:19:53.644 "method": "nvmf_ns_remove_host", 00:19:53.644 "req_id": 1 00:19:53.644 } 00:19:53.644 Got JSON-RPC error response 00:19:53.644 response: 00:19:53.644 { 00:19:53.644 "code": -32602, 00:19:53.644 "message": "Invalid parameters" 00:19:53.644 } 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # 
valid_exec_arg ns_is_visible 0x1 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:53.644 12:50:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:53.644 [ 0]:0x2 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e285d349d1542ae8d7b59c0662e180a 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e285d349d1542ae8d7b59c0662e180a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:53.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3372712 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3372712 /var/tmp/host.sock 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3372712 ']' 00:19:53.644 
12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:53.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.644 12:50:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:53.644 [2024-11-28 12:50:23.763417] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:19:53.644 [2024-11-28 12:50:23.763468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3372712 ] 00:19:53.905 [2024-11-28 12:50:23.896938] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:53.905 [2024-11-28 12:50:23.954070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.905 [2024-11-28 12:50:23.972062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.475 12:50:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.475 12:50:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:19:54.475 12:50:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:54.736 12:50:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:54.998 12:50:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 20094e36-f8e9-423b-bade-04fa3b7bd916 00:19:54.998 12:50:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:54.998 12:50:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 20094E36F8E9423BBADE04FA3B7BD916 -i 00:19:54.998 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 477660d8-229b-4b8b-8140-2135db6aa674 00:19:54.998 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:54.998 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 477660D8229B4B8B81402135DB6AA674 -i 00:19:55.258 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:55.519 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:19:55.780 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:55.780 12:50:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:56.040 nvme0n1 00:19:56.040 12:50:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:56.040 12:50:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:56.300 nvme1n2 00:19:56.300 12:50:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:19:56.300 12:50:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:19:56.300 12:50:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:56.300 12:50:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:19:56.300 12:50:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:19:56.560 12:50:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:19:56.560 12:50:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:19:56.560 12:50:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:19:56.560 12:50:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:19:56.822 12:50:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 20094e36-f8e9-423b-bade-04fa3b7bd916 == \2\0\0\9\4\e\3\6\-\f\8\e\9\-\4\2\3\b\-\b\a\d\e\-\0\4\f\a\3\b\7\b\d\9\1\6 ]] 00:19:56.822 12:50:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:19:56.822 12:50:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:19:56.822 12:50:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:19:57.082 12:50:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 477660d8-229b-4b8b-8140-2135db6aa674 == \4\7\7\6\6\0\d\8\-\2\2\9\b\-\4\b\8\b\-\8\1\4\0\-\2\1\3\5\d\b\6\a\a\6\7\4 ]] 00:19:57.082 12:50:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:57.082 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:57.342 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 20094e36-f8e9-423b-bade-04fa3b7bd916 00:19:57.342 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:57.342 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 20094E36F8E9423BBADE04FA3B7BD916 00:19:57.342 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:57.342 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 20094E36F8E9423BBADE04FA3B7BD916 00:19:57.342 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:57.342 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:57.342 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:57.342 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:57.342 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:57.342 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:57.342 12:50:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:57.342 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:57.342 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 20094E36F8E9423BBADE04FA3B7BD916 00:19:57.603 [2024-11-28 12:50:27.471817] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:19:57.603 [2024-11-28 12:50:27.471842] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:19:57.603 [2024-11-28 12:50:27.471848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:57.603 request: 00:19:57.603 { 00:19:57.603 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.603 "namespace": { 00:19:57.603 "bdev_name": "invalid", 00:19:57.603 "nsid": 1, 00:19:57.603 "nguid": "20094E36F8E9423BBADE04FA3B7BD916", 00:19:57.603 "no_auto_visible": false, 00:19:57.603 "hide_metadata": false 00:19:57.603 }, 00:19:57.603 "method": "nvmf_subsystem_add_ns", 00:19:57.603 "req_id": 1 00:19:57.603 } 00:19:57.603 Got JSON-RPC error response 00:19:57.603 response: 00:19:57.603 { 00:19:57.603 "code": -32602, 00:19:57.604 "message": "Invalid parameters" 00:19:57.604 } 00:19:57.604 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:57.604 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:57.604 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:57.604 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:57.604 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 20094e36-f8e9-423b-bade-04fa3b7bd916 00:19:57.604 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:57.604 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 20094E36F8E9423BBADE04FA3B7BD916 -i 00:19:57.604 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:20:00.161 12:50:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:20:00.161 12:50:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:20:00.161 12:50:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:20:00.161 12:50:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:20:00.161 12:50:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3372712 00:20:00.161 12:50:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3372712 ']' 00:20:00.161 12:50:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3372712 00:20:00.161 12:50:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:20:00.161 12:50:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.161 12:50:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3372712 00:20:00.161 12:50:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:00.161 12:50:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:00.161 12:50:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3372712' 00:20:00.161 killing process with pid 3372712 00:20:00.161 12:50:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3372712 00:20:00.161 12:50:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3372712 00:20:00.161 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:00.424 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:00.424 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:20:00.424 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:00.424 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:20:00.424 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:00.424 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:20:00.424 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:00.424 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:00.424 rmmod nvme_tcp 00:20:00.424 rmmod nvme_fabrics 00:20:00.424 rmmod nvme_keyring 00:20:00.424 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:00.424 12:50:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:20:00.424 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:20:00.424 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3370209 ']' 00:20:00.424 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3370209 00:20:00.424 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3370209 ']' 00:20:00.424 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3370209 00:20:00.424 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:20:00.424 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.424 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3370209 00:20:00.424 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:00.424 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:00.424 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3370209' 00:20:00.424 killing process with pid 3370209 00:20:00.424 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3370209 00:20:00.424 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3370209 00:20:00.684 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:00.684 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:00.684 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- 
# nvmf_tcp_fini 00:20:00.684 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:20:00.684 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:20:00.684 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:00.684 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:20:00.684 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:00.684 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:00.684 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.684 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.684 12:50:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.594 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:02.594 00:20:02.594 real 0m28.464s 00:20:02.594 user 0m32.332s 00:20:02.594 sys 0m8.309s 00:20:02.594 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:02.594 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:02.594 ************************************ 00:20:02.594 END TEST nvmf_ns_masking 00:20:02.594 ************************************ 00:20:02.594 12:50:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:20:02.594 12:50:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:20:02.594 12:50:32 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:02.594 12:50:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:02.594 12:50:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:02.855 ************************************ 00:20:02.855 START TEST nvmf_nvme_cli 00:20:02.855 ************************************ 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:20:02.855 * Looking for test storage... 00:20:02.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:20:02.855 
12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # 
(( ver1[v] > ver2[v] )) 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:02.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.855 --rc genhtml_branch_coverage=1 00:20:02.855 --rc genhtml_function_coverage=1 00:20:02.855 --rc genhtml_legend=1 00:20:02.855 --rc geninfo_all_blocks=1 00:20:02.855 --rc geninfo_unexecuted_blocks=1 00:20:02.855 00:20:02.855 ' 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:02.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.855 --rc genhtml_branch_coverage=1 00:20:02.855 --rc genhtml_function_coverage=1 00:20:02.855 --rc genhtml_legend=1 00:20:02.855 --rc geninfo_all_blocks=1 00:20:02.855 --rc geninfo_unexecuted_blocks=1 00:20:02.855 00:20:02.855 ' 00:20:02.855 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:02.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.855 --rc genhtml_branch_coverage=1 00:20:02.855 --rc genhtml_function_coverage=1 00:20:02.855 --rc genhtml_legend=1 00:20:02.856 --rc geninfo_all_blocks=1 00:20:02.856 --rc geninfo_unexecuted_blocks=1 00:20:02.856 00:20:02.856 ' 00:20:02.856 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:02.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.856 --rc genhtml_branch_coverage=1 00:20:02.856 --rc genhtml_function_coverage=1 00:20:02.856 --rc 
genhtml_legend=1 00:20:02.856 --rc geninfo_all_blocks=1 00:20:02.856 --rc geninfo_unexecuted_blocks=1 00:20:02.856 00:20:02.856 ' 00:20:02.856 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:02.856 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:20:02.856 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.856 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.856 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.856 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.856 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.856 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.856 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.856 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.856 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.856 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.856 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:02.856 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:02.856 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:03.117 12:50:32 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:03.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.117 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.117 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:03.117 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:03.117 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:20:03.117 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:11.258 
12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:11.258 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:11.258 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:11.258 12:50:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:11.258 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:11.258 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # 
ip -4 addr flush cvl_0_1 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:11.258 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:11.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:11.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:20:11.258 00:20:11.258 --- 10.0.0.2 ping statistics --- 00:20:11.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.258 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:20:11.259 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:11.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:11.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:20:11.259 00:20:11.259 --- 10.0.0.1 ping statistics --- 00:20:11.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.259 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:20:11.259 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.259 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:20:11.259 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:11.259 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.259 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:11.259 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:11.259 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.259 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:11.259 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:11.259 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:20:11.259 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:11.259 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:11.259 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:11.259 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3378125 00:20:11.259 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3378125 00:20:11.259 12:50:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:11.259 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3378125 ']' 00:20:11.259 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.259 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.259 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.259 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.259 12:50:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:11.259 [2024-11-28 12:50:40.596079] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:20:11.259 [2024-11-28 12:50:40.596143] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.259 [2024-11-28 12:50:40.740868] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:11.259 [2024-11-28 12:50:40.799848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:11.259 [2024-11-28 12:50:40.829544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:11.259 [2024-11-28 12:50:40.829592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.259 [2024-11-28 12:50:40.829601] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.259 [2024-11-28 12:50:40.829608] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.259 [2024-11-28 12:50:40.829615] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:11.259 [2024-11-28 12:50:40.831619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.259 [2024-11-28 12:50:40.831781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:11.259 [2024-11-28 12:50:40.831939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:11.259 [2024-11-28 12:50:40.831939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.520 12:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:11.520 [2024-11-28 12:50:41.479731] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:11.520 Malloc0 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:11.520 Malloc1 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:11.520 [2024-11-28 12:50:41.590407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:11.520 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:20:11.782 00:20:11.782 Discovery Log Number of Records 2, Generation counter 2 00:20:11.782 =====Discovery Log Entry 0====== 00:20:11.782 trtype: tcp 00:20:11.782 adrfam: ipv4 00:20:11.782 subtype: current discovery subsystem 00:20:11.782 treq: not required 00:20:11.782 portid: 0 00:20:11.782 trsvcid: 4420 00:20:11.782 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:11.782 traddr: 10.0.0.2 00:20:11.782 eflags: explicit discovery connections, duplicate discovery information 00:20:11.782 sectype: none 00:20:11.782 =====Discovery Log Entry 1====== 00:20:11.782 trtype: tcp 00:20:11.782 adrfam: ipv4 00:20:11.782 subtype: nvme subsystem 00:20:11.782 treq: not required 00:20:11.782 portid: 0 00:20:11.782 trsvcid: 4420 00:20:11.782 subnqn: nqn.2016-06.io.spdk:cnode1 00:20:11.782 traddr: 10.0.0.2 00:20:11.782 eflags: none 00:20:11.782 sectype: none 00:20:11.782 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:20:11.782 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:20:11.782 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:20:11.782 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:11.782 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:20:11.782 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:20:11.782 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:11.782 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 
00:20:11.782 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:11.782 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:20:11.782 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:13.696 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:20:13.696 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:20:13.696 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:13.696 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:20:13.696 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:20:13.696 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 
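After `nvme connect`, the log shows the harness polling (`waitforserial`) until the expected number of block devices carrying the test serial appear, with a retry cap and a sleep between attempts. The pattern can be sketched as below, parameterized over a probe command so it runs without real devices; the function and probe names here are illustrative, not the harness's own, and on a live system the probe would be `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME`.

```shell
# Poll a probe until it reports at least `want` matching devices; give up
# after 16 attempts, mirroring the (( i++ <= 15 )) retry cap in the log.
wait_for_devices() {
    local want=$1 probe=$2 i=0 got=0
    while (( i++ <= 15 )); do
        got=$("$probe")
        (( got >= want )) && return 0
        sleep 1
    done
    return 1
}

# Stand-in probe simulating two namespaces already visible.
fake_probe() { printf '2\n'; }

wait_for_devices 2 fake_probe && echo "devices ready"
# prints "devices ready"
```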
00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:20:15.635 /dev/nvme0n2 ]] 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local 
dev _ 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:20:15.635 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:15.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:15.897 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:15.897 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:20:15.897 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 
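The two `get_nvme_devs` passes traced above amount to screening `nvme list` output line by line, skipping the `Node` header and the dashed separator, and keeping only `/dev/nvme*` entries. A hedged sketch of that shape, run against a captured listing instead of the live `nvme list` (the helper body and sample serial/model columns are assumptions modeled on the log, not the harness's verbatim code):

```shell
# Keep only device paths from an `nvme list`-style table on stdin.
get_nvme_devs() {
    local dev _
    while read -r dev _; do
        [[ $dev == /dev/nvme* ]] && echo "$dev"
    done
}

# Captured listing stands in for `nvme list` (no root or hardware needed).
sample_listing='Node             SN                    Model
---------------------
/dev/nvme0n1     SPDKISFASTANDAWESOME  SPDK_Controller1
/dev/nvme0n2     SPDKISFASTANDAWESOME  SPDK_Controller1'

devs=($(echo "$sample_listing" | get_nvme_devs))
echo "${#devs[@]} namespaces: ${devs[*]}"
# prints "2 namespaces: /dev/nvme0n1 /dev/nvme0n2"
```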
-- # lsblk -o NAME,SERIAL 00:20:15.897 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:15.897 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:20:15.897 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:15.897 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:20:15.897 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:20:15.897 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:15.897 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.897 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:15.897 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.897 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:20:15.897 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:20:15.897 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:15.897 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:20:15.897 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:15.897 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:20:15.897 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:15.897 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:20:15.897 rmmod nvme_tcp 00:20:15.897 rmmod nvme_fabrics 00:20:15.897 rmmod nvme_keyring 00:20:16.157 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:16.157 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:20:16.157 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:20:16.157 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3378125 ']' 00:20:16.157 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3378125 00:20:16.157 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3378125 ']' 00:20:16.157 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3378125 00:20:16.158 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:20:16.158 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:16.158 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3378125 00:20:16.158 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:16.158 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:16.158 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3378125' 00:20:16.158 killing process with pid 3378125 00:20:16.158 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3378125 00:20:16.158 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3378125 00:20:16.158 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == 
iso ']' 00:20:16.158 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:16.158 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:16.158 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:20:16.158 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:20:16.158 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:16.158 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:20:16.158 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:16.158 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:16.158 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.158 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:16.158 12:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:18.705 00:20:18.705 real 0m15.559s 00:20:18.705 user 0m23.755s 00:20:18.705 sys 0m6.461s 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:18.705 ************************************ 00:20:18.705 END TEST nvmf_nvme_cli 00:20:18.705 ************************************ 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra -- 
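The teardown above uses an `iptr`-style idiom: because every rule the harness inserted was tagged with an `SPDK_NVMF` comment, cleanup can dump the full ruleset, drop every tagged line, and restore the remainder, without tracking individual rules. The filtering step is sketched below against a captured dump; on a live box the pipeline would be `iptables-save | grep -v SPDK_NVMF | iptables-restore` (root required), and the second sample rule here is a hypothetical bystander.

```shell
# Captured iptables-save output: one harness-tagged rule, one unrelated rule.
saved_rules='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT'

# Dropping tagged lines leaves only the bystander rule to be restored.
kept=$(printf '%s\n' "$saved_rules" | grep -v SPDK_NVMF)
echo "$kept"
# prints "-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT"
```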
nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:18.705 ************************************ 00:20:18.705 START TEST nvmf_vfio_user 00:20:18.705 ************************************ 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:20:18.705 * Looking for test storage... 00:20:18.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:20:18.705 12:50:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:18.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.705 --rc genhtml_branch_coverage=1 00:20:18.705 --rc genhtml_function_coverage=1 00:20:18.705 --rc genhtml_legend=1 00:20:18.705 --rc geninfo_all_blocks=1 00:20:18.705 --rc geninfo_unexecuted_blocks=1 
00:20:18.705 00:20:18.705 ' 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:18.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.705 --rc genhtml_branch_coverage=1 00:20:18.705 --rc genhtml_function_coverage=1 00:20:18.705 --rc genhtml_legend=1 00:20:18.705 --rc geninfo_all_blocks=1 00:20:18.705 --rc geninfo_unexecuted_blocks=1 00:20:18.705 00:20:18.705 ' 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:18.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.705 --rc genhtml_branch_coverage=1 00:20:18.705 --rc genhtml_function_coverage=1 00:20:18.705 --rc genhtml_legend=1 00:20:18.705 --rc geninfo_all_blocks=1 00:20:18.705 --rc geninfo_unexecuted_blocks=1 00:20:18.705 00:20:18.705 ' 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:18.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.705 --rc genhtml_branch_coverage=1 00:20:18.705 --rc genhtml_function_coverage=1 00:20:18.705 --rc genhtml_legend=1 00:20:18.705 --rc geninfo_all_blocks=1 00:20:18.705 --rc geninfo_unexecuted_blocks=1 00:20:18.705 00:20:18.705 ' 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:18.705 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:18.706 12:50:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:18.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
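The `[: : integer expression expected` message logged above comes from `'[' '' -eq 1 ']'`: an arithmetic test run against an empty variable. A minimal sketch of the failure and a common defensive pattern (defaulting the variable before comparing; the `flag` name here is hypothetical, not from common.sh):

```shell
#!/usr/bin/env bash
# Reproduce the logged error: '' is not an integer, so -eq fails.
flag=""
[ "$flag" -eq 1 ] 2>/dev/null && echo "hugepages disabled"

# Defensive variant: substitute a default of 0 when the variable is empty/unset,
# so the arithmetic test always sees an integer.
if [ "${flag:-0}" -eq 1 ]; then
    echo "hugepages disabled"
else
    echo "hugepages enabled"
fi
```

With the default substitution the branch is taken cleanly instead of emitting the `integer expression expected` diagnostic on stderr.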
target/nvmf_vfio_user.sh@55 -- # nvmfpid=3379919 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3379919' 00:20:18.706 Process pid: 3379919 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3379919 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3379919 ']' 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.706 12:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:18.706 [2024-11-28 12:50:48.715008] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:20:18.706 [2024-11-28 12:50:48.715077] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.967 [2024-11-28 12:50:48.851532] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:18.967 [2024-11-28 12:50:48.893142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:18.967 [2024-11-28 12:50:48.910024] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.967 [2024-11-28 12:50:48.910055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.967 [2024-11-28 12:50:48.910061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.967 [2024-11-28 12:50:48.910066] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.967 [2024-11-28 12:50:48.910070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:18.967 [2024-11-28 12:50:48.911244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.967 [2024-11-28 12:50:48.911437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.967 [2024-11-28 12:50:48.911438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:18.967 [2024-11-28 12:50:48.911289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.539 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.539 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:20:19.539 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:20:20.481 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:20:20.742 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:20:20.742 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:20:20.742 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:20.742 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:20:20.742 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:21.003 Malloc1 00:20:21.003 12:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:20:21.003 12:50:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:20:21.265 12:50:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:20:21.524 12:50:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:21.524 12:50:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:20:21.524 12:50:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:21.524 Malloc2 00:20:21.784 12:50:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:20:21.784 12:50:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:20:22.045 12:50:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:20:22.306 12:50:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:20:22.306 12:50:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:20:22.306 12:50:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
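The per-device setup loop logged above (socket directory, malloc bdev, subsystem, namespace, VFIOUSER listener) can be sketched as follows. This is a dry-run sketch: `rpc_py` is stubbed with `echo` here, whereas the real test points it at `scripts/rpc.py` talking to a running `nvmf_tgt`, and the socket path is shortened from the `/var/run/vfio-user/...` paths in the log.

```shell
#!/usr/bin/env bash
# Dry-run stub; the actual test invokes scripts/rpc.py against nvmf_tgt.
rpc_py() { echo "rpc.py $*"; }

NUM_DEVICES=2
for i in $(seq 1 "$NUM_DEVICES"); do
    # Directory that backs the vfio-user socket for this controller.
    mkdir -p "/tmp/vfio-user/domain/vfio-user$i/$i"
    # 64 MiB malloc bdev with 512-byte blocks, as in the log.
    rpc_py bdev_malloc_create 64 512 -b "Malloc$i"
    # Subsystem with any-host access (-a) and a fixed serial (-s).
    rpc_py nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_py nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    # VFIOUSER listener whose address is the socket directory.
    rpc_py nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "/tmp/vfio-user/domain/vfio-user$i/$i" -s 0
done
```

Each iteration mirrors one pass of the `for i in $(seq 1 $NUM_DEVICES)` loop in nvmf_vfio_user.sh; a transport must already exist (`nvmf_create_transport -t VFIOUSER`, logged earlier) before the listeners are added.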
$(seq 1 $NUM_DEVICES) 00:20:22.306 12:50:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:20:22.306 12:50:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:20:22.306 12:50:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:20:22.306 [2024-11-28 12:50:52.230244] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:20:22.306 [2024-11-28 12:50:52.230265] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3380613 ] 00:20:22.306 [2024-11-28 12:50:52.338099] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:20:22.306 [2024-11-28 12:50:52.365056] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:20:22.306 [2024-11-28 12:50:52.373360] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:22.306 [2024-11-28 12:50:52.373374] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe5d0da6000 00:20:22.306 [2024-11-28 12:50:52.374365] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:22.306 [2024-11-28 12:50:52.375364] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:22.306 [2024-11-28 12:50:52.376364] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:22.306 [2024-11-28 12:50:52.377369] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:22.306 [2024-11-28 12:50:52.378370] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:22.306 [2024-11-28 12:50:52.379377] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:22.306 [2024-11-28 12:50:52.380377] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:22.306 [2024-11-28 12:50:52.381382] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:22.306 [2024-11-28 12:50:52.382390] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 
00:20:22.306 [2024-11-28 12:50:52.382397] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe5cfaa7000 00:20:22.306 [2024-11-28 12:50:52.383309] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:22.306 [2024-11-28 12:50:52.396759] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:20:22.306 [2024-11-28 12:50:52.396779] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:20:22.306 [2024-11-28 12:50:52.399448] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:20:22.306 [2024-11-28 12:50:52.399484] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:20:22.306 [2024-11-28 12:50:52.399549] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:20:22.306 [2024-11-28 12:50:52.399562] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:20:22.306 [2024-11-28 12:50:52.399566] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:20:22.306 [2024-11-28 12:50:52.400443] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:20:22.306 [2024-11-28 12:50:52.400451] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:20:22.306 [2024-11-28 12:50:52.400457] 
nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:20:22.307 [2024-11-28 12:50:52.401441] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:20:22.307 [2024-11-28 12:50:52.401448] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:20:22.307 [2024-11-28 12:50:52.401453] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:20:22.307 [2024-11-28 12:50:52.402452] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:20:22.307 [2024-11-28 12:50:52.402458] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:22.307 [2024-11-28 12:50:52.403456] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:20:22.307 [2024-11-28 12:50:52.403462] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:20:22.307 [2024-11-28 12:50:52.403466] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:20:22.307 [2024-11-28 12:50:52.403471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:22.307 [2024-11-28 12:50:52.403575] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:20:22.307 
[2024-11-28 12:50:52.403578] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:22.307 [2024-11-28 12:50:52.403582] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003e4000 00:20:22.307 [2024-11-28 12:50:52.404461] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003e2000 00:20:22.307 [2024-11-28 12:50:52.405459] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:20:22.307 [2024-11-28 12:50:52.406467] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:20:22.307 [2024-11-28 12:50:52.407462] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:22.307 [2024-11-28 12:50:52.407510] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:22.307 [2024-11-28 12:50:52.408468] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:20:22.307 [2024-11-28 12:50:52.408473] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:22.307 [2024-11-28 12:50:52.408477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:20:22.307 [2024-11-28 12:50:52.408492] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 
00:20:22.307 [2024-11-28 12:50:52.408498] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:20:22.307 [2024-11-28 12:50:52.408513] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x20000031f000 len:4096 00:20:22.307 [2024-11-28 12:50:52.408517] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x20000031f000 00:20:22.307 [2024-11-28 12:50:52.408519] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:22.307 [2024-11-28 12:50:52.408530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x20000031f000 PRP2 0x0 00:20:22.307 [2024-11-28 12:50:52.408569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:20:22.307 [2024-11-28 12:50:52.408577] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:20:22.307 [2024-11-28 12:50:52.408580] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:20:22.307 [2024-11-28 12:50:52.408583] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:20:22.307 [2024-11-28 12:50:52.408588] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:20:22.307 [2024-11-28 12:50:52.408592] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:20:22.307 [2024-11-28 12:50:52.408595] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:20:22.307 [2024-11-28 
12:50:52.408598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:20:22.307 [2024-11-28 12:50:52.408604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:20:22.307 [2024-11-28 12:50:52.408611] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:20:22.307 [2024-11-28 12:50:52.408626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:20:22.307 [2024-11-28 12:50:52.408634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.307 [2024-11-28 12:50:52.408640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.307 [2024-11-28 12:50:52.408646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.307 [2024-11-28 12:50:52.408652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.307 [2024-11-28 12:50:52.408656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:20:22.307 [2024-11-28 12:50:52.408662] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:22.307 [2024-11-28 12:50:52.408669] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:20:22.307 
[2024-11-28 12:50:52.408680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:20:22.307 [2024-11-28 12:50:52.408684] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:20:22.307 [2024-11-28 12:50:52.408688] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:22.307 [2024-11-28 12:50:52.408694] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:20:22.307 [2024-11-28 12:50:52.408698] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:20:22.307 [2024-11-28 12:50:52.408705] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:22.307 [2024-11-28 12:50:52.408716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:20:22.307 [2024-11-28 12:50:52.408759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:20:22.307 [2024-11-28 12:50:52.408765] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:20:22.307 [2024-11-28 12:50:52.408771] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x20000031d000 len:4096 00:20:22.307 [2024-11-28 12:50:52.408775] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x20000031d000 00:20:22.307 [2024-11-28 12:50:52.408778] 
nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:22.307 [2024-11-28 12:50:52.408782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x20000031d000 PRP2 0x0 00:20:22.307 [2024-11-28 12:50:52.408793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:20:22.307 [2024-11-28 12:50:52.408802] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:20:22.307 [2024-11-28 12:50:52.408811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:20:22.307 [2024-11-28 12:50:52.408817] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:20:22.307 [2024-11-28 12:50:52.408821] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x20000031f000 len:4096 00:20:22.307 [2024-11-28 12:50:52.408824] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x20000031f000 00:20:22.307 [2024-11-28 12:50:52.408827] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:22.307 [2024-11-28 12:50:52.408831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x20000031f000 PRP2 0x0 00:20:22.307 [2024-11-28 12:50:52.408850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:20:22.307 [2024-11-28 12:50:52.408858] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:22.307 [2024-11-28 12:50:52.408864] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:22.307 [2024-11-28 12:50:52.408869] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x20000031f000 len:4096 00:20:22.307 [2024-11-28 12:50:52.408872] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x20000031f000 00:20:22.307 [2024-11-28 12:50:52.408874] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:22.307 [2024-11-28 12:50:52.408878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x20000031f000 PRP2 0x0 00:20:22.307 [2024-11-28 12:50:52.408889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:20:22.307 [2024-11-28 12:50:52.408897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:22.307 [2024-11-28 12:50:52.408902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:20:22.307 [2024-11-28 12:50:52.408907] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:20:22.307 [2024-11-28 12:50:52.408911] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:20:22.307 [2024-11-28 12:50:52.408915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:22.307 [2024-11-28 12:50:52.408919] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:20:22.307 [2024-11-28 12:50:52.408923] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:20:22.307 [2024-11-28 12:50:52.408928] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:20:22.307 [2024-11-28 12:50:52.408932] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:20:22.307 [2024-11-28 12:50:52.408945] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:20:22.307 [2024-11-28 12:50:52.408952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:20:22.307 [2024-11-28 12:50:52.408961] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:20:22.307 [2024-11-28 12:50:52.408971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:20:22.307 [2024-11-28 12:50:52.408979] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:20:22.307 [2024-11-28 12:50:52.408986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:20:22.307 [2024-11-28 12:50:52.408994] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:22.307 [2024-11-28 12:50:52.409004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 
cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:20:22.307 [2024-11-28 12:50:52.409013] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x20000031a000 len:8192 00:20:22.307 [2024-11-28 12:50:52.409016] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x20000031a000 00:20:22.307 [2024-11-28 12:50:52.409019] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x20000031b000 00:20:22.307 [2024-11-28 12:50:52.409021] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x20000031b000 00:20:22.307 [2024-11-28 12:50:52.409024] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:20:22.307 [2024-11-28 12:50:52.409028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x20000031a000 PRP2 0x20000031b000 00:20:22.307 [2024-11-28 12:50:52.409033] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x200000320000 len:512 00:20:22.307 [2024-11-28 12:50:52.409036] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x200000320000 00:20:22.307 [2024-11-28 12:50:52.409039] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:22.307 [2024-11-28 12:50:52.409043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x200000320000 PRP2 0x0 00:20:22.307 [2024-11-28 12:50:52.409048] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x20000031f000 len:512 00:20:22.307 [2024-11-28 12:50:52.409051] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x20000031f000 00:20:22.307 [2024-11-28 12:50:52.409053] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:22.307 [2024-11-28 12:50:52.409058] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x20000031f000 PRP2 0x0 00:20:22.307 [2024-11-28 12:50:52.409063] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x200000318000 len:4096 00:20:22.307 [2024-11-28 12:50:52.409066] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x200000318000 00:20:22.307 [2024-11-28 12:50:52.409068] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:22.307 [2024-11-28 12:50:52.409073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x200000318000 PRP2 0x0 00:20:22.307 [2024-11-28 12:50:52.409079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:20:22.307 [2024-11-28 12:50:52.409088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:20:22.307 [2024-11-28 12:50:52.409096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:20:22.307 [2024-11-28 12:50:52.409101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:20:22.307 ===================================================== 00:20:22.307 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:22.307 ===================================================== 00:20:22.307 Controller Capabilities/Features 00:20:22.307 ================================ 00:20:22.307 Vendor ID: 4e58 00:20:22.307 Subsystem Vendor ID: 4e58 00:20:22.307 Serial Number: SPDK1 00:20:22.307 Model Number: SPDK bdev Controller 00:20:22.307 Firmware Version: 25.01 00:20:22.307 Recommended Arb Burst: 6 00:20:22.307 IEEE OUI 
Identifier: 8d 6b 50 00:20:22.307 Multi-path I/O 00:20:22.307 May have multiple subsystem ports: Yes 00:20:22.307 May have multiple controllers: Yes 00:20:22.307 Associated with SR-IOV VF: No 00:20:22.307 Max Data Transfer Size: 131072 00:20:22.307 Max Number of Namespaces: 32 00:20:22.307 Max Number of I/O Queues: 127 00:20:22.307 NVMe Specification Version (VS): 1.3 00:20:22.307 NVMe Specification Version (Identify): 1.3 00:20:22.307 Maximum Queue Entries: 256 00:20:22.307 Contiguous Queues Required: Yes 00:20:22.307 Arbitration Mechanisms Supported 00:20:22.307 Weighted Round Robin: Not Supported 00:20:22.307 Vendor Specific: Not Supported 00:20:22.307 Reset Timeout: 15000 ms 00:20:22.307 Doorbell Stride: 4 bytes 00:20:22.307 NVM Subsystem Reset: Not Supported 00:20:22.307 Command Sets Supported 00:20:22.307 NVM Command Set: Supported 00:20:22.307 Boot Partition: Not Supported 00:20:22.307 Memory Page Size Minimum: 4096 bytes 00:20:22.307 Memory Page Size Maximum: 4096 bytes 00:20:22.307 Persistent Memory Region: Not Supported 00:20:22.307 Optional Asynchronous Events Supported 00:20:22.307 Namespace Attribute Notices: Supported 00:20:22.307 Firmware Activation Notices: Not Supported 00:20:22.307 ANA Change Notices: Not Supported 00:20:22.307 PLE Aggregate Log Change Notices: Not Supported 00:20:22.307 LBA Status Info Alert Notices: Not Supported 00:20:22.307 EGE Aggregate Log Change Notices: Not Supported 00:20:22.307 Normal NVM Subsystem Shutdown event: Not Supported 00:20:22.307 Zone Descriptor Change Notices: Not Supported 00:20:22.307 Discovery Log Change Notices: Not Supported 00:20:22.307 Controller Attributes 00:20:22.307 128-bit Host Identifier: Supported 00:20:22.307 Non-Operational Permissive Mode: Not Supported 00:20:22.308 NVM Sets: Not Supported 00:20:22.308 Read Recovery Levels: Not Supported 00:20:22.308 Endurance Groups: Not Supported 00:20:22.308 Predictable Latency Mode: Not Supported 00:20:22.308 Traffic Based Keep ALive: Not Supported 
00:20:22.308 Namespace Granularity: Not Supported 00:20:22.308 SQ Associations: Not Supported 00:20:22.308 UUID List: Not Supported 00:20:22.308 Multi-Domain Subsystem: Not Supported 00:20:22.308 Fixed Capacity Management: Not Supported 00:20:22.308 Variable Capacity Management: Not Supported 00:20:22.308 Delete Endurance Group: Not Supported 00:20:22.308 Delete NVM Set: Not Supported 00:20:22.308 Extended LBA Formats Supported: Not Supported 00:20:22.308 Flexible Data Placement Supported: Not Supported 00:20:22.308 00:20:22.308 Controller Memory Buffer Support 00:20:22.308 ================================ 00:20:22.308 Supported: No 00:20:22.308 00:20:22.308 Persistent Memory Region Support 00:20:22.308 ================================ 00:20:22.308 Supported: No 00:20:22.308 00:20:22.308 Admin Command Set Attributes 00:20:22.308 ============================ 00:20:22.308 Security Send/Receive: Not Supported 00:20:22.308 Format NVM: Not Supported 00:20:22.308 Firmware Activate/Download: Not Supported 00:20:22.308 Namespace Management: Not Supported 00:20:22.308 Device Self-Test: Not Supported 00:20:22.308 Directives: Not Supported 00:20:22.308 NVMe-MI: Not Supported 00:20:22.308 Virtualization Management: Not Supported 00:20:22.308 Doorbell Buffer Config: Not Supported 00:20:22.308 Get LBA Status Capability: Not Supported 00:20:22.308 Command & Feature Lockdown Capability: Not Supported 00:20:22.308 Abort Command Limit: 4 00:20:22.308 Async Event Request Limit: 4 00:20:22.308 Number of Firmware Slots: N/A 00:20:22.308 Firmware Slot 1 Read-Only: N/A 00:20:22.308 Firmware Activation Without Reset: N/A 00:20:22.308 Multiple Update Detection Support: N/A 00:20:22.308 Firmware Update Granularity: No Information Provided 00:20:22.308 Per-Namespace SMART Log: No 00:20:22.308 Asymmetric Namespace Access Log Page: Not Supported 00:20:22.308 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:20:22.308 Command Effects Log Page: Supported 00:20:22.308 Get Log Page Extended Data: 
Supported 00:20:22.308 Telemetry Log Pages: Not Supported 00:20:22.308 Persistent Event Log Pages: Not Supported 00:20:22.308 Supported Log Pages Log Page: May Support 00:20:22.308 Commands Supported & Effects Log Page: Not Supported 00:20:22.308 Feature Identifiers & Effects Log Page:May Support 00:20:22.308 NVMe-MI Commands & Effects Log Page: May Support 00:20:22.308 Data Area 4 for Telemetry Log: Not Supported 00:20:22.308 Error Log Page Entries Supported: 128 00:20:22.308 Keep Alive: Supported 00:20:22.308 Keep Alive Granularity: 10000 ms 00:20:22.308 00:20:22.308 NVM Command Set Attributes 00:20:22.308 ========================== 00:20:22.308 Submission Queue Entry Size 00:20:22.308 Max: 64 00:20:22.308 Min: 64 00:20:22.308 Completion Queue Entry Size 00:20:22.308 Max: 16 00:20:22.308 Min: 16 00:20:22.308 Number of Namespaces: 32 00:20:22.308 Compare Command: Supported 00:20:22.308 Write Uncorrectable Command: Not Supported 00:20:22.308 Dataset Management Command: Supported 00:20:22.308 Write Zeroes Command: Supported 00:20:22.308 Set Features Save Field: Not Supported 00:20:22.308 Reservations: Not Supported 00:20:22.308 Timestamp: Not Supported 00:20:22.308 Copy: Supported 00:20:22.308 Volatile Write Cache: Present 00:20:22.308 Atomic Write Unit (Normal): 1 00:20:22.308 Atomic Write Unit (PFail): 1 00:20:22.308 Atomic Compare & Write Unit: 1 00:20:22.308 Fused Compare & Write: Supported 00:20:22.308 Scatter-Gather List 00:20:22.308 SGL Command Set: Supported (Dword aligned) 00:20:22.308 SGL Keyed: Not Supported 00:20:22.308 SGL Bit Bucket Descriptor: Not Supported 00:20:22.308 SGL Metadata Pointer: Not Supported 00:20:22.308 Oversized SGL: Not Supported 00:20:22.308 SGL Metadata Address: Not Supported 00:20:22.308 SGL Offset: Not Supported 00:20:22.308 Transport SGL Data Block: Not Supported 00:20:22.308 Replay Protected Memory Block: Not Supported 00:20:22.308 00:20:22.308 Firmware Slot Information 00:20:22.308 ========================= 00:20:22.308 Active 
slot: 1 00:20:22.308 Slot 1 Firmware Revision: 25.01 00:20:22.308 00:20:22.308 00:20:22.308 Commands Supported and Effects 00:20:22.308 ============================== 00:20:22.308 Admin Commands 00:20:22.308 -------------- 00:20:22.308 Get Log Page (02h): Supported 00:20:22.308 Identify (06h): Supported 00:20:22.308 Abort (08h): Supported 00:20:22.308 Set Features (09h): Supported 00:20:22.308 Get Features (0Ah): Supported 00:20:22.308 Asynchronous Event Request (0Ch): Supported 00:20:22.308 Keep Alive (18h): Supported 00:20:22.308 I/O Commands 00:20:22.308 ------------ 00:20:22.308 Flush (00h): Supported LBA-Change 00:20:22.308 Write (01h): Supported LBA-Change 00:20:22.308 Read (02h): Supported 00:20:22.308 Compare (05h): Supported 00:20:22.308 Write Zeroes (08h): Supported LBA-Change 00:20:22.308 Dataset Management (09h): Supported LBA-Change 00:20:22.308 Copy (19h): Supported LBA-Change 00:20:22.308 00:20:22.308 Error Log 00:20:22.308 ========= 00:20:22.308 00:20:22.308 Arbitration 00:20:22.308 =========== 00:20:22.308 Arbitration Burst: 1 00:20:22.308 00:20:22.308 Power Management 00:20:22.308 ================ 00:20:22.308 Number of Power States: 1 00:20:22.308 Current Power State: Power State #0 00:20:22.308 Power State #0: 00:20:22.308 Max Power: 0.00 W 00:20:22.308 Non-Operational State: Operational 00:20:22.308 Entry Latency: Not Reported 00:20:22.308 Exit Latency: Not Reported 00:20:22.308 Relative Read Throughput: 0 00:20:22.308 Relative Read Latency: 0 00:20:22.308 Relative Write Throughput: 0 00:20:22.308 Relative Write Latency: 0 00:20:22.308 Idle Power: Not Reported 00:20:22.308 Active Power: Not Reported 00:20:22.308 Non-Operational Permissive Mode: Not Supported 00:20:22.308 00:20:22.308 Health Information 00:20:22.308 ================== 00:20:22.308 Critical Warnings: 00:20:22.308 Available Spare Space: OK 00:20:22.308 Temperature: OK 00:20:22.308 Device Reliability: OK 00:20:22.308 Read Only: No 00:20:22.308 Volatile Memory Backup: OK 
00:20:22.308 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:22.308 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:22.308 Available Spare: 0% 00:20:22.308 Available Sp[2024-11-28 12:50:52.409176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:20:22.308 [2024-11-28 12:50:52.409186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:20:22.308 [2024-11-28 12:50:52.409207] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:20:22.308 [2024-11-28 12:50:52.409214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.308 [2024-11-28 12:50:52.409218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.308 [2024-11-28 12:50:52.409223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.308 [2024-11-28 12:50:52.409227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.308 [2024-11-28 12:50:52.411164] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:20:22.308 [2024-11-28 12:50:52.411172] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:20:22.308 [2024-11-28 12:50:52.411478] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:22.308 [2024-11-28 12:50:52.411516] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 
00:20:22.308 [2024-11-28 12:50:52.411521] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:20:22.308 [2024-11-28 12:50:52.412482] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:20:22.308 [2024-11-28 12:50:52.412490] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:20:22.308 [2024-11-28 12:50:52.412540] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:20:22.308 [2024-11-28 12:50:52.413494] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:22.568 are Threshold: 0% 00:20:22.568 Life Percentage Used: 0% 00:20:22.568 Data Units Read: 0 00:20:22.568 Data Units Written: 0 00:20:22.568 Host Read Commands: 0 00:20:22.568 Host Write Commands: 0 00:20:22.568 Controller Busy Time: 0 minutes 00:20:22.568 Power Cycles: 0 00:20:22.568 Power On Hours: 0 hours 00:20:22.568 Unsafe Shutdowns: 0 00:20:22.568 Unrecoverable Media Errors: 0 00:20:22.568 Lifetime Error Log Entries: 0 00:20:22.568 Warning Temperature Time: 0 minutes 00:20:22.568 Critical Temperature Time: 0 minutes 00:20:22.568 00:20:22.568 Number of Queues 00:20:22.568 ================ 00:20:22.568 Number of I/O Submission Queues: 127 00:20:22.568 Number of I/O Completion Queues: 127 00:20:22.568 00:20:22.568 Active Namespaces 00:20:22.568 ================= 00:20:22.568 Namespace ID:1 00:20:22.568 Error Recovery Timeout: Unlimited 00:20:22.568 Command Set Identifier: NVM (00h) 00:20:22.568 Deallocate: Supported 00:20:22.568 Deallocated/Unwritten Error: Not Supported 00:20:22.568 Deallocated Read Value: Unknown 00:20:22.568 Deallocate in Write Zeroes: Not Supported 00:20:22.568 Deallocated Guard Field: 0xFFFF 
00:20:22.568 Flush: Supported 00:20:22.568 Reservation: Supported 00:20:22.568 Namespace Sharing Capabilities: Multiple Controllers 00:20:22.568 Size (in LBAs): 131072 (0GiB) 00:20:22.568 Capacity (in LBAs): 131072 (0GiB) 00:20:22.568 Utilization (in LBAs): 131072 (0GiB) 00:20:22.568 NGUID: 9A78598DCB6F46EF8333CFC9F52DF32B 00:20:22.568 UUID: 9a78598d-cb6f-46ef-8333-cfc9f52df32b 00:20:22.568 Thin Provisioning: Not Supported 00:20:22.568 Per-NS Atomic Units: Yes 00:20:22.568 Atomic Boundary Size (Normal): 0 00:20:22.568 Atomic Boundary Size (PFail): 0 00:20:22.568 Atomic Boundary Offset: 0 00:20:22.568 Maximum Single Source Range Length: 65535 00:20:22.568 Maximum Copy Length: 65535 00:20:22.568 Maximum Source Range Count: 1 00:20:22.568 NGUID/EUI64 Never Reused: No 00:20:22.568 Namespace Write Protected: No 00:20:22.568 Number of LBA Formats: 1 00:20:22.568 Current LBA Format: LBA Format #00 00:20:22.568 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:22.568 00:20:22.568 12:50:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:20:22.568 [2024-11-28 12:50:52.692481] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:27.847 Initializing NVMe Controllers 00:20:27.847 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:27.848 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:20:27.848 Initialization complete. Launching workers. 
00:20:27.848 ======================================================== 00:20:27.848 Latency(us) 00:20:27.848 Device Information : IOPS MiB/s Average min max 00:20:27.848 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39851.19 155.67 3211.83 866.44 10783.65 00:20:27.848 ======================================================== 00:20:27.848 Total : 39851.19 155.67 3211.83 866.44 10783.65 00:20:27.848 00:20:27.848 [2024-11-28 12:50:57.698015] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:27.848 12:50:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:20:28.108 [2024-11-28 12:50:57.991448] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:33.390 Initializing NVMe Controllers 00:20:33.390 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:33.390 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:20:33.390 Initialization complete. Launching workers. 
00:20:33.390 ======================================================== 00:20:33.390 Latency(us) 00:20:33.390 Device Information : IOPS MiB/s Average min max 00:20:33.390 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.60 62.60 7994.58 6687.97 9001.94 00:20:33.390 ======================================================== 00:20:33.390 Total : 16025.60 62.60 7994.58 6687.97 9001.94 00:20:33.390 00:20:33.390 [2024-11-28 12:51:03.017476] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:33.390 12:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:20:33.390 [2024-11-28 12:51:03.329922] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:38.674 [2024-11-28 12:51:08.376293] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:38.674 Initializing NVMe Controllers 00:20:38.674 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:38.674 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:38.674 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:20:38.674 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:20:38.674 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:20:38.674 Initialization complete. Launching workers. 
00:20:38.674 Starting thread on core 2 00:20:38.674 Starting thread on core 3 00:20:38.674 Starting thread on core 1 00:20:38.674 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:20:38.674 [2024-11-28 12:51:08.725545] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:41.975 [2024-11-28 12:51:11.771366] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:41.975 Initializing NVMe Controllers 00:20:41.975 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:20:41.975 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:20:41.975 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:20:41.975 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:20:41.975 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:20:41.975 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:20:41.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:20:41.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:20:41.975 Initialization complete. Launching workers. 
00:20:41.975 Starting thread on core 1 with urgent priority queue 00:20:41.975 Starting thread on core 2 with urgent priority queue 00:20:41.975 Starting thread on core 3 with urgent priority queue 00:20:41.975 Starting thread on core 0 with urgent priority queue 00:20:41.975 SPDK bdev Controller (SPDK1 ) core 0: 10382.33 IO/s 9.63 secs/100000 ios 00:20:41.975 SPDK bdev Controller (SPDK1 ) core 1: 13926.33 IO/s 7.18 secs/100000 ios 00:20:41.975 SPDK bdev Controller (SPDK1 ) core 2: 9411.00 IO/s 10.63 secs/100000 ios 00:20:41.975 SPDK bdev Controller (SPDK1 ) core 3: 13397.33 IO/s 7.46 secs/100000 ios 00:20:41.975 ======================================================== 00:20:41.975 00:20:41.975 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:20:42.237 [2024-11-28 12:51:12.120516] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:42.237 Initializing NVMe Controllers 00:20:42.237 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:20:42.237 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:20:42.237 Namespace ID: 1 size: 0GB 00:20:42.237 Initialization complete. 00:20:42.237 INFO: using host memory buffer for IO 00:20:42.237 Hello world! 
00:20:42.237 [2024-11-28 12:51:12.156658] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:42.237 12:51:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:20:42.497 [2024-11-28 12:51:12.498410] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:43.437 Initializing NVMe Controllers 00:20:43.437 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:20:43.437 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:20:43.437 Initialization complete. Launching workers. 00:20:43.437 submit (in ns) avg, min, max = 5973.1, 2824.1, 4008340.3 00:20:43.437 complete (in ns) avg, min, max = 17904.3, 1638.0, 8005395.1 00:20:43.437 00:20:43.437 Submit histogram 00:20:43.437 ================ 00:20:43.437 Range in us Cumulative Count 00:20:43.437 2.820 - 2.833: 0.1698% ( 34) 00:20:43.437 2.833 - 2.847: 0.7342% ( 113) 00:20:43.437 2.847 - 2.860: 2.6071% ( 375) 00:20:43.437 2.860 - 2.873: 7.1122% ( 902) 00:20:43.437 2.873 - 2.887: 12.7410% ( 1127) 00:20:43.437 2.887 - 2.900: 19.5635% ( 1366) 00:20:43.437 2.900 - 2.913: 26.3161% ( 1352) 00:20:43.437 2.913 - 2.927: 31.3955% ( 1017) 00:20:43.437 2.927 - 2.940: 36.5947% ( 1041) 00:20:43.437 2.940 - 2.954: 41.9838% ( 1079) 00:20:43.437 2.954 - 2.967: 48.0821% ( 1221) 00:20:43.437 2.967 - 2.980: 53.8358% ( 1152) 00:20:43.437 2.980 - 2.994: 61.2876% ( 1492) 00:20:43.437 2.994 - 3.007: 69.1839% ( 1581) 00:20:43.437 3.007 - 3.020: 77.6046% ( 1686) 00:20:43.437 3.020 - 3.034: 85.1164% ( 1504) 00:20:43.437 3.034 - 3.047: 91.0249% ( 1183) 00:20:43.437 3.047 - 3.060: 95.3401% ( 864) 00:20:43.437 3.060 - 3.074: 97.6576% ( 464) 00:20:43.437 3.074 - 3.087: 98.7714% ( 223) 00:20:43.437 3.087 - 3.101: 
99.3158% ( 109) 00:20:43.437 3.101 - 3.114: 99.4506% ( 27) 00:20:43.437 3.114 - 3.127: 99.5405% ( 18) 00:20:43.437 3.127 - 3.141: 99.5755% ( 7) 00:20:43.437 3.141 - 3.154: 99.5905% ( 3) 00:20:43.437 3.368 - 3.381: 99.5954% ( 1) 00:20:43.437 3.448 - 3.475: 99.6004% ( 1) 00:20:43.437 3.502 - 3.528: 99.6054% ( 1) 00:20:43.437 3.555 - 3.582: 99.6104% ( 1) 00:20:43.437 3.849 - 3.876: 99.6154% ( 1) 00:20:43.437 4.090 - 4.116: 99.6204% ( 1) 00:20:43.437 4.223 - 4.250: 99.6254% ( 1) 00:20:43.437 4.410 - 4.437: 99.6304% ( 1) 00:20:43.437 4.437 - 4.464: 99.6354% ( 1) 00:20:43.437 4.571 - 4.597: 99.6404% ( 1) 00:20:43.437 4.704 - 4.731: 99.6454% ( 1) 00:20:43.437 4.731 - 4.758: 99.6604% ( 3) 00:20:43.437 4.758 - 4.784: 99.6654% ( 1) 00:20:43.437 4.838 - 4.865: 99.6704% ( 1) 00:20:43.437 4.891 - 4.918: 99.6804% ( 2) 00:20:43.437 4.918 - 4.945: 99.6953% ( 3) 00:20:43.437 4.945 - 4.972: 99.7053% ( 2) 00:20:43.437 4.972 - 4.998: 99.7103% ( 1) 00:20:43.437 4.998 - 5.025: 99.7253% ( 3) 00:20:43.437 5.052 - 5.079: 99.7303% ( 1) 00:20:43.437 5.079 - 5.105: 99.7453% ( 3) 00:20:43.437 5.105 - 5.132: 99.7503% ( 1) 00:20:43.437 5.185 - 5.212: 99.7553% ( 1) 00:20:43.437 5.319 - 5.346: 99.7603% ( 1) 00:20:43.437 5.533 - 5.560: 99.7653% ( 1) 00:20:43.437 5.667 - 5.693: 99.7703% ( 1) 00:20:43.437 5.693 - 5.720: 99.7752% ( 1) 00:20:43.437 5.720 - 5.747: 99.7802% ( 1) 00:20:43.437 5.747 - 5.773: 99.7852% ( 1) 00:20:43.437 5.800 - 5.827: 99.7952% ( 2) 00:20:43.437 5.827 - 5.854: 99.8002% ( 1) 00:20:43.437 5.934 - 5.961: 99.8152% ( 3) 00:20:43.437 6.014 - 6.041: 99.8202% ( 1) 00:20:43.437 6.067 - 6.094: 99.8302% ( 2) 00:20:43.437 6.174 - 6.201: 99.8352% ( 1) 00:20:43.437 6.201 - 6.228: 99.8402% ( 1) 00:20:43.437 6.415 - 6.442: 99.8502% ( 2) 00:20:43.437 6.495 - 6.522: 99.8602% ( 2) 00:20:43.437 6.549 - 6.575: 99.8701% ( 2) 00:20:43.437 6.709 - 6.736: 99.8801% ( 2) 00:20:43.437 6.789 - 6.816: 99.8851% ( 1) 00:20:43.437 6.896 - 6.950: 99.8901% ( 1) 00:20:43.438 7.003 - 7.056: 99.8951% ( 1) 
00:20:43.438 7.110 - 7.163: 99.9001% ( 1) 00:20:43.438 7.270 - 7.324: 99.9051% ( 1) 00:20:43.438 7.377 - 7.431: 99.9101% ( 1) 00:20:43.438 7.591 - 7.645: 99.9151% ( 1) 00:20:43.438 8.019 - 8.072: 99.9201% ( 1) 00:20:43.438 8.821 - 8.874: 99.9251% ( 1) 00:20:43.438 3996.098 - 4023.468: 100.0000% ( 15) 00:20:43.438 00:20:43.438 Complete histogram 00:20:43.438 ================== 00:20:43.438 Range in us Cumulative Count 00:20:43.438 1.637 - 1.644: 0.1548% ( 31) 00:20:43.438 1.644 - 1.651: 1.1038% ( 190) 00:20:43.438 1.651 - [2024-11-28 12:51:13.515501] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:43.438 1.657: 1.1887% ( 17) 00:20:43.438 1.657 - 1.664: 1.2786% ( 18) 00:20:43.438 1.664 - 1.671: 1.3485% ( 14) 00:20:43.438 1.671 - 1.677: 1.3785% ( 6) 00:20:43.438 1.677 - 1.684: 9.6494% ( 1656) 00:20:43.438 1.684 - 1.691: 54.1554% ( 8911) 00:20:43.438 1.691 - 1.697: 59.2598% ( 1022) 00:20:43.438 1.697 - 1.704: 68.9542% ( 1941) 00:20:43.438 1.704 - 1.711: 76.8105% ( 1573) 00:20:43.438 1.711 - 1.724: 82.5991% ( 1159) 00:20:43.438 1.724 - 1.737: 83.7129% ( 223) 00:20:43.438 1.737 - 1.751: 87.7385% ( 806) 00:20:43.438 1.751 - 1.764: 93.4872% ( 1151) 00:20:43.438 1.764 - 1.777: 97.2131% ( 746) 00:20:43.438 1.777 - 1.791: 98.9162% ( 341) 00:20:43.438 1.791 - 1.804: 99.3607% ( 89) 00:20:43.438 1.804 - 1.818: 99.4406% ( 16) 00:20:43.438 1.818 - 1.831: 99.4506% ( 2) 00:20:43.438 1.831 - 1.844: 99.4606% ( 2) 00:20:43.438 1.844 - 1.858: 99.4656% ( 1) 00:20:43.438 3.288 - 3.301: 99.4706% ( 1) 00:20:43.438 3.354 - 3.368: 99.4756% ( 1) 00:20:43.438 3.502 - 3.528: 99.4806% ( 1) 00:20:43.438 3.742 - 3.769: 99.4856% ( 1) 00:20:43.438 3.849 - 3.876: 99.4906% ( 1) 00:20:43.438 4.170 - 4.196: 99.4956% ( 1) 00:20:43.438 4.277 - 4.303: 99.5005% ( 1) 00:20:43.438 4.330 - 4.357: 99.5105% ( 2) 00:20:43.438 4.490 - 4.517: 99.5155% ( 1) 00:20:43.438 4.517 - 4.544: 99.5205% ( 1) 00:20:43.438 4.544 - 4.571: 99.5255% ( 1) 00:20:43.438 
4.597 - 4.624: 99.5305% ( 1) 00:20:43.438 4.651 - 4.678: 99.5355% ( 1) 00:20:43.438 4.865 - 4.891: 99.5405% ( 1) 00:20:43.438 4.918 - 4.945: 99.5455% ( 1) 00:20:43.438 5.079 - 5.105: 99.5505% ( 1) 00:20:43.438 5.132 - 5.159: 99.5605% ( 2) 00:20:43.438 5.185 - 5.212: 99.5655% ( 1) 00:20:43.438 5.212 - 5.239: 99.5705% ( 1) 00:20:43.438 5.292 - 5.319: 99.5755% ( 1) 00:20:43.438 5.373 - 5.399: 99.5805% ( 1) 00:20:43.438 5.720 - 5.747: 99.5855% ( 1) 00:20:43.438 5.773 - 5.800: 99.5905% ( 1) 00:20:43.438 6.094 - 6.121: 99.5954% ( 1) 00:20:43.438 8.767 - 8.821: 99.6004% ( 1) 00:20:43.438 9.462 - 9.516: 99.6054% ( 1) 00:20:43.438 10.531 - 10.585: 99.6104% ( 1) 00:20:43.438 3996.098 - 4023.468: 99.9850% ( 75) 00:20:43.438 7992.195 - 8046.936: 100.0000% ( 3) 00:20:43.438 00:20:43.438 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:20:43.438 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:20:43.438 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:20:43.438 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:20:43.438 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:43.698 [ 00:20:43.698 { 00:20:43.698 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:43.698 "subtype": "Discovery", 00:20:43.698 "listen_addresses": [], 00:20:43.698 "allow_any_host": true, 00:20:43.698 "hosts": [] 00:20:43.698 }, 00:20:43.698 { 00:20:43.698 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:43.698 "subtype": "NVMe", 00:20:43.698 "listen_addresses": [ 00:20:43.698 { 00:20:43.698 "trtype": "VFIOUSER", 00:20:43.698 
"adrfam": "IPv4", 00:20:43.698 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:43.698 "trsvcid": "0" 00:20:43.698 } 00:20:43.698 ], 00:20:43.698 "allow_any_host": true, 00:20:43.698 "hosts": [], 00:20:43.698 "serial_number": "SPDK1", 00:20:43.698 "model_number": "SPDK bdev Controller", 00:20:43.698 "max_namespaces": 32, 00:20:43.698 "min_cntlid": 1, 00:20:43.698 "max_cntlid": 65519, 00:20:43.698 "namespaces": [ 00:20:43.699 { 00:20:43.699 "nsid": 1, 00:20:43.699 "bdev_name": "Malloc1", 00:20:43.699 "name": "Malloc1", 00:20:43.699 "nguid": "9A78598DCB6F46EF8333CFC9F52DF32B", 00:20:43.699 "uuid": "9a78598d-cb6f-46ef-8333-cfc9f52df32b" 00:20:43.699 } 00:20:43.699 ] 00:20:43.699 }, 00:20:43.699 { 00:20:43.699 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:43.699 "subtype": "NVMe", 00:20:43.699 "listen_addresses": [ 00:20:43.699 { 00:20:43.699 "trtype": "VFIOUSER", 00:20:43.699 "adrfam": "IPv4", 00:20:43.699 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:43.699 "trsvcid": "0" 00:20:43.699 } 00:20:43.699 ], 00:20:43.699 "allow_any_host": true, 00:20:43.699 "hosts": [], 00:20:43.699 "serial_number": "SPDK2", 00:20:43.699 "model_number": "SPDK bdev Controller", 00:20:43.699 "max_namespaces": 32, 00:20:43.699 "min_cntlid": 1, 00:20:43.699 "max_cntlid": 65519, 00:20:43.699 "namespaces": [ 00:20:43.699 { 00:20:43.699 "nsid": 1, 00:20:43.699 "bdev_name": "Malloc2", 00:20:43.699 "name": "Malloc2", 00:20:43.699 "nguid": "619A0D0EED7B4B2C9F1D0F37C39D7B3F", 00:20:43.699 "uuid": "619a0d0e-ed7b-4b2c-9f1d-0f37c39d7b3f" 00:20:43.699 } 00:20:43.699 ] 00:20:43.699 } 00:20:43.699 ] 00:20:43.699 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:43.699 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 
subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:20:43.699 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3385227 00:20:43.699 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:20:43.699 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:20:43.699 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:43.699 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:43.699 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:20:43.699 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:20:43.699 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:20:43.960 Malloc3 00:20:43.960 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:20:43.960 [2024-11-28 12:51:14.000421] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:44.222 [2024-11-28 12:51:14.092832] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:44.222 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:44.222 Asynchronous Event Request test 00:20:44.222 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:20:44.222 Attached 
to /var/run/vfio-user/domain/vfio-user1/1 00:20:44.222 Registering asynchronous event callbacks... 00:20:44.222 Starting namespace attribute notice tests for all controllers... 00:20:44.222 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:44.222 aer_cb - Changed Namespace 00:20:44.222 Cleaning up... 00:20:44.222 [ 00:20:44.222 { 00:20:44.222 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:44.222 "subtype": "Discovery", 00:20:44.222 "listen_addresses": [], 00:20:44.222 "allow_any_host": true, 00:20:44.222 "hosts": [] 00:20:44.222 }, 00:20:44.222 { 00:20:44.222 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:44.222 "subtype": "NVMe", 00:20:44.222 "listen_addresses": [ 00:20:44.222 { 00:20:44.222 "trtype": "VFIOUSER", 00:20:44.222 "adrfam": "IPv4", 00:20:44.222 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:44.222 "trsvcid": "0" 00:20:44.222 } 00:20:44.222 ], 00:20:44.222 "allow_any_host": true, 00:20:44.222 "hosts": [], 00:20:44.222 "serial_number": "SPDK1", 00:20:44.222 "model_number": "SPDK bdev Controller", 00:20:44.222 "max_namespaces": 32, 00:20:44.222 "min_cntlid": 1, 00:20:44.222 "max_cntlid": 65519, 00:20:44.222 "namespaces": [ 00:20:44.222 { 00:20:44.222 "nsid": 1, 00:20:44.222 "bdev_name": "Malloc1", 00:20:44.222 "name": "Malloc1", 00:20:44.222 "nguid": "9A78598DCB6F46EF8333CFC9F52DF32B", 00:20:44.222 "uuid": "9a78598d-cb6f-46ef-8333-cfc9f52df32b" 00:20:44.222 }, 00:20:44.222 { 00:20:44.222 "nsid": 2, 00:20:44.222 "bdev_name": "Malloc3", 00:20:44.222 "name": "Malloc3", 00:20:44.222 "nguid": "8D7490BBDD77429294BA1BE26F6D05AF", 00:20:44.222 "uuid": "8d7490bb-dd77-4292-94ba-1be26f6d05af" 00:20:44.222 } 00:20:44.222 ] 00:20:44.222 }, 00:20:44.222 { 00:20:44.222 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:44.222 "subtype": "NVMe", 00:20:44.222 "listen_addresses": [ 00:20:44.222 { 00:20:44.222 "trtype": "VFIOUSER", 00:20:44.222 "adrfam": "IPv4", 00:20:44.222 "traddr": 
"/var/run/vfio-user/domain/vfio-user2/2", 00:20:44.222 "trsvcid": "0" 00:20:44.222 } 00:20:44.222 ], 00:20:44.222 "allow_any_host": true, 00:20:44.222 "hosts": [], 00:20:44.222 "serial_number": "SPDK2", 00:20:44.222 "model_number": "SPDK bdev Controller", 00:20:44.222 "max_namespaces": 32, 00:20:44.222 "min_cntlid": 1, 00:20:44.222 "max_cntlid": 65519, 00:20:44.222 "namespaces": [ 00:20:44.222 { 00:20:44.222 "nsid": 1, 00:20:44.222 "bdev_name": "Malloc2", 00:20:44.222 "name": "Malloc2", 00:20:44.222 "nguid": "619A0D0EED7B4B2C9F1D0F37C39D7B3F", 00:20:44.222 "uuid": "619a0d0e-ed7b-4b2c-9f1d-0f37c39d7b3f" 00:20:44.222 } 00:20:44.222 ] 00:20:44.222 } 00:20:44.222 ] 00:20:44.222 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3385227 00:20:44.222 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:44.222 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:20:44.222 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:20:44.222 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:20:44.222 [2024-11-28 12:51:14.333084] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:20:44.222 [2024-11-28 12:51:14.333127] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385504 ] 00:20:44.484 [2024-11-28 12:51:14.448490] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:44.484 [2024-11-28 12:51:14.480757] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:20:44.484 [2024-11-28 12:51:14.488304] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:44.484 [2024-11-28 12:51:14.488319] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff552724000 00:20:44.484 [2024-11-28 12:51:14.489305] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:44.484 [2024-11-28 12:51:14.490308] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:44.484 [2024-11-28 12:51:14.491311] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:44.484 [2024-11-28 12:51:14.492311] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:44.484 [2024-11-28 12:51:14.493316] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:44.484 [2024-11-28 12:51:14.494322] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap 
offset 0 00:20:44.484 [2024-11-28 12:51:14.495327] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:44.484 [2024-11-28 12:51:14.496330] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:44.484 [2024-11-28 12:51:14.497347] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:44.484 [2024-11-28 12:51:14.497355] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff551425000 00:20:44.484 [2024-11-28 12:51:14.498266] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:44.484 [2024-11-28 12:51:14.507628] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:20:44.484 [2024-11-28 12:51:14.507646] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:20:44.484 [2024-11-28 12:51:14.512701] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:20:44.484 [2024-11-28 12:51:14.512735] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:20:44.484 [2024-11-28 12:51:14.512792] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:20:44.484 [2024-11-28 12:51:14.512803] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:20:44.484 [2024-11-28 12:51:14.512807] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:20:44.484 [2024-11-28 12:51:14.513705] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:20:44.484 [2024-11-28 12:51:14.513713] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:20:44.484 [2024-11-28 12:51:14.513718] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:20:44.484 [2024-11-28 12:51:14.514709] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:20:44.484 [2024-11-28 12:51:14.514715] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:20:44.484 [2024-11-28 12:51:14.514720] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:20:44.484 [2024-11-28 12:51:14.515716] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:20:44.484 [2024-11-28 12:51:14.515723] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:44.484 [2024-11-28 12:51:14.516718] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:20:44.484 [2024-11-28 12:51:14.516724] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:20:44.484 [2024-11-28 12:51:14.516727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:20:44.484 [2024-11-28 12:51:14.516732] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:44.484 [2024-11-28 12:51:14.516836] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:20:44.484 [2024-11-28 12:51:14.516839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:44.484 [2024-11-28 12:51:14.516843] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003e4000 00:20:44.484 [2024-11-28 12:51:14.517724] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003e2000 00:20:44.484 [2024-11-28 12:51:14.518727] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:20:44.484 [2024-11-28 12:51:14.519729] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:20:44.484 [2024-11-28 12:51:14.520727] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:44.484 [2024-11-28 12:51:14.520755] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:44.484 [2024-11-28 12:51:14.521737] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:20:44.484 [2024-11-28 12:51:14.521743] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:44.484 [2024-11-28 12:51:14.521746] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:20:44.484 [2024-11-28 12:51:14.521761] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:20:44.484 [2024-11-28 12:51:14.521766] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:20:44.484 [2024-11-28 12:51:14.521777] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x20000031f000 len:4096 00:20:44.484 [2024-11-28 12:51:14.521780] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x20000031f000 00:20:44.484 [2024-11-28 12:51:14.521783] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:44.484 [2024-11-28 12:51:14.521793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x20000031f000 PRP2 0x0 00:20:44.484 [2024-11-28 12:51:14.529165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:20:44.484 [2024-11-28 12:51:14.529173] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:20:44.484 [2024-11-28 12:51:14.529176] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:20:44.484 [2024-11-28 12:51:14.529180] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 
00:20:44.484 [2024-11-28 12:51:14.529183] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:20:44.484 [2024-11-28 12:51:14.529186] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:20:44.484 [2024-11-28 12:51:14.529190] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:20:44.484 [2024-11-28 12:51:14.529193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:20:44.484 [2024-11-28 12:51:14.529199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:20:44.485 [2024-11-28 12:51:14.529206] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:20:44.485 [2024-11-28 12:51:14.537165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:20:44.485 [2024-11-28 12:51:14.537174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:20:44.485 [2024-11-28 12:51:14.537181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:20:44.485 [2024-11-28 12:51:14.537187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:20:44.485 [2024-11-28 12:51:14.537194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:20:44.485 [2024-11-28 12:51:14.537198] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:20:44.485 [2024-11-28 12:51:14.537204] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:44.485 [2024-11-28 12:51:14.537211] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:20:44.485 [2024-11-28 12:51:14.545162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:20:44.485 [2024-11-28 12:51:14.545168] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:20:44.485 [2024-11-28 12:51:14.545172] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:44.485 [2024-11-28 12:51:14.545178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:20:44.485 [2024-11-28 12:51:14.545182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:20:44.485 [2024-11-28 12:51:14.545188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:44.485 [2024-11-28 12:51:14.553169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:20:44.485 [2024-11-28 12:51:14.553215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns 
(timeout 30000 ms) 00:20:44.485 [2024-11-28 12:51:14.553221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:20:44.485 [2024-11-28 12:51:14.553226] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x20000031d000 len:4096 00:20:44.485 [2024-11-28 12:51:14.553229] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x20000031d000 00:20:44.485 [2024-11-28 12:51:14.553232] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:44.485 [2024-11-28 12:51:14.553237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x20000031d000 PRP2 0x0 00:20:44.485 [2024-11-28 12:51:14.561162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:20:44.485 [2024-11-28 12:51:14.561171] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:20:44.485 [2024-11-28 12:51:14.561181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:20:44.485 [2024-11-28 12:51:14.561186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:20:44.485 [2024-11-28 12:51:14.561191] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x20000031f000 len:4096 00:20:44.485 [2024-11-28 12:51:14.561194] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x20000031f000 00:20:44.485 [2024-11-28 12:51:14.561197] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:44.485 [2024-11-28 12:51:14.561201] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x20000031f000 PRP2 0x0 00:20:44.485 [2024-11-28 12:51:14.569161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:20:44.485 [2024-11-28 12:51:14.569170] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:44.485 [2024-11-28 12:51:14.569175] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:44.485 [2024-11-28 12:51:14.569180] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x20000031f000 len:4096 00:20:44.485 [2024-11-28 12:51:14.569183] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x20000031f000 00:20:44.485 [2024-11-28 12:51:14.569185] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:44.485 [2024-11-28 12:51:14.569190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x20000031f000 PRP2 0x0 00:20:44.485 [2024-11-28 12:51:14.577162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:20:44.485 [2024-11-28 12:51:14.577171] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:44.485 [2024-11-28 12:51:14.577176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:20:44.485 [2024-11-28 12:51:14.577181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:20:44.485 [2024-11-28 12:51:14.577185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:20:44.485 [2024-11-28 12:51:14.577189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:44.485 [2024-11-28 12:51:14.577192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:20:44.485 [2024-11-28 12:51:14.577196] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:20:44.485 [2024-11-28 12:51:14.577199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:20:44.485 [2024-11-28 12:51:14.577203] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:20:44.485 [2024-11-28 12:51:14.577215] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:20:44.485 [2024-11-28 12:51:14.585163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:20:44.485 [2024-11-28 12:51:14.585172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:20:44.485 [2024-11-28 12:51:14.593161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:20:44.485 [2024-11-28 12:51:14.593171] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:20:44.485 [2024-11-28 12:51:14.601162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:20:44.485 [2024-11-28 12:51:14.601172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:44.746 [2024-11-28 12:51:14.609163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:20:44.746 [2024-11-28 12:51:14.609178] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x20000031a000 len:8192 00:20:44.746 [2024-11-28 12:51:14.609181] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x20000031a000 00:20:44.746 [2024-11-28 12:51:14.609184] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x20000031b000 00:20:44.746 [2024-11-28 12:51:14.609186] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x20000031b000 00:20:44.746 [2024-11-28 12:51:14.609189] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:20:44.746 [2024-11-28 12:51:14.609193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x20000031a000 PRP2 0x20000031b000 00:20:44.746 [2024-11-28 12:51:14.609199] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x200000320000 len:512 00:20:44.746 [2024-11-28 12:51:14.609202] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x200000320000 00:20:44.746 [2024-11-28 12:51:14.609204] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:44.746 [2024-11-28 12:51:14.609209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x200000320000 PRP2 0x0 00:20:44.746 [2024-11-28 12:51:14.609214] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x20000031f000 len:512 00:20:44.746 [2024-11-28 12:51:14.609217] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x20000031f000 00:20:44.746 [2024-11-28 12:51:14.609219] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:44.746 [2024-11-28 12:51:14.609223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x20000031f000 PRP2 0x0 00:20:44.746 [2024-11-28 12:51:14.609229] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x200000318000 len:4096 00:20:44.746 [2024-11-28 12:51:14.609231] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x200000318000 00:20:44.746 [2024-11-28 12:51:14.609234] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:44.746 [2024-11-28 12:51:14.609238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x200000318000 PRP2 0x0 00:20:44.746 [2024-11-28 12:51:14.617162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:20:44.746 [2024-11-28 12:51:14.617172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:20:44.746 [2024-11-28 12:51:14.617179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:20:44.746 [2024-11-28 12:51:14.617184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:20:44.746 
===================================================== 00:20:44.746 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:44.746 ===================================================== 00:20:44.746 Controller Capabilities/Features 00:20:44.746 ================================ 00:20:44.746 Vendor ID: 4e58 00:20:44.746 Subsystem Vendor ID: 4e58 00:20:44.746 Serial Number: SPDK2 00:20:44.746 Model Number: SPDK bdev Controller 00:20:44.746 Firmware Version: 25.01 00:20:44.746 Recommended Arb Burst: 6 00:20:44.746 IEEE OUI Identifier: 8d 6b 50 00:20:44.746 Multi-path I/O 00:20:44.746 May have multiple subsystem ports: Yes 00:20:44.746 May have multiple controllers: Yes 00:20:44.746 Associated with SR-IOV VF: No 00:20:44.746 Max Data Transfer Size: 131072 00:20:44.746 Max Number of Namespaces: 32 00:20:44.746 Max Number of I/O Queues: 127 00:20:44.746 NVMe Specification Version (VS): 1.3 00:20:44.746 NVMe Specification Version (Identify): 1.3 00:20:44.746 Maximum Queue Entries: 256 00:20:44.746 Contiguous Queues Required: Yes 00:20:44.746 Arbitration Mechanisms Supported 00:20:44.746 Weighted Round Robin: Not Supported 00:20:44.746 Vendor Specific: Not Supported 00:20:44.746 Reset Timeout: 15000 ms 00:20:44.746 Doorbell Stride: 4 bytes 00:20:44.746 NVM Subsystem Reset: Not Supported 00:20:44.746 Command Sets Supported 00:20:44.746 NVM Command Set: Supported 00:20:44.746 Boot Partition: Not Supported 00:20:44.746 Memory Page Size Minimum: 4096 bytes 00:20:44.746 Memory Page Size Maximum: 4096 bytes 00:20:44.746 Persistent Memory Region: Not Supported 00:20:44.746 Optional Asynchronous Events Supported 00:20:44.746 Namespace Attribute Notices: Supported 00:20:44.746 Firmware Activation Notices: Not Supported 00:20:44.746 ANA Change Notices: Not Supported 00:20:44.746 PLE Aggregate Log Change Notices: Not Supported 00:20:44.746 LBA Status Info Alert Notices: Not Supported 00:20:44.746 EGE Aggregate Log Change Notices: Not 
Supported 00:20:44.746 Normal NVM Subsystem Shutdown event: Not Supported 00:20:44.746 Zone Descriptor Change Notices: Not Supported 00:20:44.746 Discovery Log Change Notices: Not Supported 00:20:44.746 Controller Attributes 00:20:44.746 128-bit Host Identifier: Supported 00:20:44.746 Non-Operational Permissive Mode: Not Supported 00:20:44.746 NVM Sets: Not Supported 00:20:44.746 Read Recovery Levels: Not Supported 00:20:44.746 Endurance Groups: Not Supported 00:20:44.746 Predictable Latency Mode: Not Supported 00:20:44.746 Traffic Based Keep Alive: Not Supported 00:20:44.746 Namespace Granularity: Not Supported 00:20:44.746 SQ Associations: Not Supported 00:20:44.746 UUID List: Not Supported 00:20:44.746 Multi-Domain Subsystem: Not Supported 00:20:44.746 Fixed Capacity Management: Not Supported 00:20:44.746 Variable Capacity Management: Not Supported 00:20:44.746 Delete Endurance Group: Not Supported 00:20:44.746 Delete NVM Set: Not Supported 00:20:44.746 Extended LBA Formats Supported: Not Supported 00:20:44.746 Flexible Data Placement Supported: Not Supported 00:20:44.746 00:20:44.746 Controller Memory Buffer Support 00:20:44.746 ================================ 00:20:44.746 Supported: No 00:20:44.746 00:20:44.746 Persistent Memory Region Support 00:20:44.746 ================================ 00:20:44.747 Supported: No 00:20:44.747 00:20:44.747 Admin Command Set Attributes 00:20:44.747 ============================ 00:20:44.747 Security Send/Receive: Not Supported 00:20:44.747 Format NVM: Not Supported 00:20:44.747 Firmware Activate/Download: Not Supported 00:20:44.747 Namespace Management: Not Supported 00:20:44.747 Device Self-Test: Not Supported 00:20:44.747 Directives: Not Supported 00:20:44.747 NVMe-MI: Not Supported 00:20:44.747 Virtualization Management: Not Supported 00:20:44.747 Doorbell Buffer Config: Not Supported 00:20:44.747 Get LBA Status Capability: Not Supported 00:20:44.747 Command & Feature Lockdown Capability: Not Supported 00:20:44.747 Abort 
Command Limit: 4 00:20:44.747 Async Event Request Limit: 4 00:20:44.747 Number of Firmware Slots: N/A 00:20:44.747 Firmware Slot 1 Read-Only: N/A 00:20:44.747 Firmware Activation Without Reset: N/A 00:20:44.747 Multiple Update Detection Support: N/A 00:20:44.747 Firmware Update Granularity: No Information Provided 00:20:44.747 Per-Namespace SMART Log: No 00:20:44.747 Asymmetric Namespace Access Log Page: Not Supported 00:20:44.747 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:20:44.747 Command Effects Log Page: Supported 00:20:44.747 Get Log Page Extended Data: Supported 00:20:44.747 Telemetry Log Pages: Not Supported 00:20:44.747 Persistent Event Log Pages: Not Supported 00:20:44.747 Supported Log Pages Log Page: May Support 00:20:44.747 Commands Supported & Effects Log Page: Not Supported 00:20:44.747 Feature Identifiers & Effects Log Page: May Support 00:20:44.747 NVMe-MI Commands & Effects Log Page: May Support 00:20:44.747 Data Area 4 for Telemetry Log: Not Supported 00:20:44.747 Error Log Page Entries Supported: 128 00:20:44.747 Keep Alive: Supported 00:20:44.747 Keep Alive Granularity: 10000 ms 00:20:44.747 00:20:44.747 NVM Command Set Attributes 00:20:44.747 ========================== 00:20:44.747 Submission Queue Entry Size 00:20:44.747 Max: 64 00:20:44.747 Min: 64 00:20:44.747 Completion Queue Entry Size 00:20:44.747 Max: 16 00:20:44.747 Min: 16 00:20:44.747 Number of Namespaces: 32 00:20:44.747 Compare Command: Supported 00:20:44.747 Write Uncorrectable Command: Not Supported 00:20:44.747 Dataset Management Command: Supported 00:20:44.747 Write Zeroes Command: Supported 00:20:44.747 Set Features Save Field: Not Supported 00:20:44.747 Reservations: Not Supported 00:20:44.747 Timestamp: Not Supported 00:20:44.747 Copy: Supported 00:20:44.747 Volatile Write Cache: Present 00:20:44.747 Atomic Write Unit (Normal): 1 00:20:44.747 Atomic Write Unit (PFail): 1 00:20:44.747 Atomic Compare & Write Unit: 1 00:20:44.747 Fused Compare & Write: Supported 00:20:44.747 
Scatter-Gather List 00:20:44.747 SGL Command Set: Supported (Dword aligned) 00:20:44.747 SGL Keyed: Not Supported 00:20:44.747 SGL Bit Bucket Descriptor: Not Supported 00:20:44.747 SGL Metadata Pointer: Not Supported 00:20:44.747 Oversized SGL: Not Supported 00:20:44.747 SGL Metadata Address: Not Supported 00:20:44.747 SGL Offset: Not Supported 00:20:44.747 Transport SGL Data Block: Not Supported 00:20:44.747 Replay Protected Memory Block: Not Supported 00:20:44.747 00:20:44.747 Firmware Slot Information 00:20:44.747 ========================= 00:20:44.747 Active slot: 1 00:20:44.747 Slot 1 Firmware Revision: 25.01 00:20:44.747 00:20:44.747 00:20:44.747 Commands Supported and Effects 00:20:44.747 ============================== 00:20:44.747 Admin Commands 00:20:44.747 -------------- 00:20:44.747 Get Log Page (02h): Supported 00:20:44.747 Identify (06h): Supported 00:20:44.747 Abort (08h): Supported 00:20:44.747 Set Features (09h): Supported 00:20:44.747 Get Features (0Ah): Supported 00:20:44.747 Asynchronous Event Request (0Ch): Supported 00:20:44.747 Keep Alive (18h): Supported 00:20:44.747 I/O Commands 00:20:44.747 ------------ 00:20:44.747 Flush (00h): Supported LBA-Change 00:20:44.747 Write (01h): Supported LBA-Change 00:20:44.747 Read (02h): Supported 00:20:44.747 Compare (05h): Supported 00:20:44.747 Write Zeroes (08h): Supported LBA-Change 00:20:44.747 Dataset Management (09h): Supported LBA-Change 00:20:44.747 Copy (19h): Supported LBA-Change 00:20:44.747 00:20:44.747 Error Log 00:20:44.747 ========= 00:20:44.747 00:20:44.747 Arbitration 00:20:44.747 =========== 00:20:44.747 Arbitration Burst: 1 00:20:44.747 00:20:44.747 Power Management 00:20:44.747 ================ 00:20:44.747 Number of Power States: 1 00:20:44.747 Current Power State: Power State #0 00:20:44.747 Power State #0: 00:20:44.747 Max Power: 0.00 W 00:20:44.747 Non-Operational State: Operational 00:20:44.747 Entry Latency: Not Reported 00:20:44.747 Exit Latency: Not Reported 00:20:44.747 
Relative Read Throughput: 0 00:20:44.747 Relative Read Latency: 0 00:20:44.747 Relative Write Throughput: 0 00:20:44.747 Relative Write Latency: 0 00:20:44.747 Idle Power: Not Reported 00:20:44.747 Active Power: Not Reported 00:20:44.747 Non-Operational Permissive Mode: Not Supported 00:20:44.747 00:20:44.747 Health Information 00:20:44.747 ================== 00:20:44.747 Critical Warnings: 00:20:44.747 Available Spare Space: OK 00:20:44.747 Temperature: OK 00:20:44.747 Device Reliability: OK 00:20:44.747 Read Only: No 00:20:44.747 Volatile Memory Backup: OK 00:20:44.747 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:44.747 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:44.747 Available Spare: 0% 00:20:44.747 Available Spare Threshold: 0% 00:20:44.747 Life Percentage Used: 0% 00:20:44.747 Data Units Read: 0 00:20:44.747 Data Units Written: 0 00:20:44.747 Host Read Commands: 0 00:20:44.747 Host Write Commands: 0 00:20:44.747 Controller Busy Time: 0 minutes 00:20:44.747 Power Cycles: 0 00:20:44.747 Power On Hours: 0 hours 00:20:44.747 Unsafe Shutdowns: 0 00:20:44.747 Unrecoverable Media Errors: 0 00:20:44.747 Lifetime Error Log Entries: 0 00:20:44.747 Warning Temperature Time: 0 minutes 00:20:44.747 Critical Temperature Time: 0 minutes 00:20:44.747 
[2024-11-28 12:51:14.617255] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:20:44.747 [2024-11-28 12:51:14.625163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:20:44.747 [2024-11-28 12:51:14.625186] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:20:44.747 [2024-11-28 12:51:14.625192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.747 [2024-11-28 12:51:14.625197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.747 [2024-11-28 12:51:14.625201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.747 [2024-11-28 12:51:14.625206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.747 [2024-11-28 12:51:14.625243] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:20:44.747 [2024-11-28 12:51:14.625250] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:20:44.747 [2024-11-28 12:51:14.626246] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:44.747 [2024-11-28 12:51:14.626280] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:20:44.747 [2024-11-28 12:51:14.626285] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:20:44.747 [2024-11-28 12:51:14.627247] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:20:44.747 [2024-11-28 12:51:14.627255] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:20:44.747 [2024-11-28 12:51:14.627294] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:20:44.747 [2024-11-28 12:51:14.628315] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:44.747 
00:20:44.747 Number of Queues 00:20:44.747 ================ 00:20:44.747 Number of I/O Submission Queues: 127 00:20:44.747 Number of I/O Completion Queues: 127 00:20:44.747 00:20:44.747 Active Namespaces 00:20:44.747 ================= 00:20:44.747 Namespace ID:1 00:20:44.747 Error Recovery Timeout: Unlimited 00:20:44.747 Command Set Identifier: NVM (00h) 00:20:44.747 Deallocate: Supported 00:20:44.747 Deallocated/Unwritten Error: Not Supported 00:20:44.747 Deallocated Read Value: Unknown 00:20:44.747 Deallocate in Write Zeroes: Not Supported 00:20:44.747 Deallocated Guard Field: 0xFFFF 00:20:44.747 Flush: Supported 00:20:44.747 Reservation: Supported 00:20:44.747 Namespace Sharing Capabilities: Multiple Controllers 00:20:44.747 Size (in LBAs): 131072 (0GiB) 00:20:44.747 Capacity (in LBAs): 131072 (0GiB) 00:20:44.747 Utilization (in LBAs): 131072 (0GiB) 00:20:44.747 NGUID: 619A0D0EED7B4B2C9F1D0F37C39D7B3F 00:20:44.747 UUID: 619a0d0e-ed7b-4b2c-9f1d-0f37c39d7b3f 00:20:44.747 Thin Provisioning: Not Supported 00:20:44.747 Per-NS Atomic Units: Yes 00:20:44.747 Atomic Boundary Size (Normal): 0 00:20:44.747 Atomic Boundary Size (PFail): 0 00:20:44.747 Atomic Boundary Offset: 0 00:20:44.747 Maximum Single Source Range Length: 65535 00:20:44.747 Maximum Copy Length: 65535 00:20:44.747 Maximum Source Range Count: 1 00:20:44.747 NGUID/EUI64 Never Reused: No 00:20:44.747 Namespace Write Protected: No 00:20:44.747 Number of LBA Formats: 1 00:20:44.747 Current LBA Format: LBA Format #00 00:20:44.747 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:44.747 00:20:44.747 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:20:45.007 [2024-11-28 12:51:14.916911] vfio_user.c:2840:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:50.456 Initializing NVMe Controllers 00:20:50.456 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:50.456 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:20:50.456 Initialization complete. Launching workers. 00:20:50.456 ======================================================== 00:20:50.456 Latency(us) 00:20:50.456 Device Information : IOPS MiB/s Average min max 00:20:50.456 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39983.40 156.19 3201.38 862.45 9775.90 00:20:50.456 ======================================================== 00:20:50.456 Total : 39983.40 156.19 3201.38 862.45 9775.90 00:20:50.456 00:20:50.456 [2024-11-28 12:51:20.005469] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:50.456 12:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:20:50.456 [2024-11-28 12:51:20.302694] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:55.750 Initializing NVMe Controllers 00:20:55.750 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:55.750 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:20:55.750 Initialization complete. Launching workers. 
00:20:55.750 ======================================================== 00:20:55.750 Latency(us) 00:20:55.750 Device Information : IOPS MiB/s Average min max 00:20:55.750 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39876.60 155.77 3209.75 858.42 6831.84 00:20:55.750 ======================================================== 00:20:55.750 Total : 39876.60 155.77 3209.75 858.42 6831.84 00:20:55.750 00:20:55.750 [2024-11-28 12:51:25.311201] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:55.750 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:20:55.750 [2024-11-28 12:51:25.611959] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:01.041 [2024-11-28 12:51:30.728238] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:01.041 Initializing NVMe Controllers 00:21:01.041 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:21:01.041 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:21:01.041 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:21:01.041 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:21:01.041 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:21:01.041 Initialization complete. Launching workers. 
00:21:01.041 Starting thread on core 2 00:21:01.041 Starting thread on core 3 00:21:01.041 Starting thread on core 1 00:21:01.041 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:21:01.041 [2024-11-28 12:51:31.078198] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:04.362 [2024-11-28 12:51:34.142880] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:04.362 Initializing NVMe Controllers 00:21:04.362 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:21:04.362 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:21:04.362 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:21:04.362 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:21:04.362 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:21:04.362 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:21:04.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:21:04.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:21:04.362 Initialization complete. Launching workers. 
00:21:04.362 Starting thread on core 1 with urgent priority queue 00:21:04.362 Starting thread on core 2 with urgent priority queue 00:21:04.362 Starting thread on core 3 with urgent priority queue 00:21:04.362 Starting thread on core 0 with urgent priority queue 00:21:04.362 SPDK bdev Controller (SPDK2 ) core 0: 12929.67 IO/s 7.73 secs/100000 ios 00:21:04.362 SPDK bdev Controller (SPDK2 ) core 1: 8292.33 IO/s 12.06 secs/100000 ios 00:21:04.362 SPDK bdev Controller (SPDK2 ) core 2: 9390.33 IO/s 10.65 secs/100000 ios 00:21:04.362 SPDK bdev Controller (SPDK2 ) core 3: 13054.00 IO/s 7.66 secs/100000 ios 00:21:04.362 ======================================================== 00:21:04.362 00:21:04.362 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:21:04.362 [2024-11-28 12:51:34.487086] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:04.624 Initializing NVMe Controllers 00:21:04.624 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:21:04.624 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:21:04.624 Namespace ID: 1 size: 0GB 00:21:04.624 Initialization complete. 00:21:04.624 INFO: using host memory buffer for IO 00:21:04.624 Hello world! 
00:21:04.624 [2024-11-28 12:51:34.496121] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:04.624 12:51:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:21:04.885 [2024-11-28 12:51:34.839736] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:05.838 Initializing NVMe Controllers 00:21:05.838 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:21:05.838 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:21:05.838 Initialization complete. Launching workers. 00:21:05.838 submit (in ns) avg, min, max = 5577.7, 2845.0, 4008245.1 00:21:05.838 complete (in ns) avg, min, max = 17581.3, 1643.0, 4008558.3 00:21:05.838 00:21:05.838 Submit histogram 00:21:05.838 ================ 00:21:05.838 Range in us Cumulative Count 00:21:05.838 2.833 - 2.847: 0.0099% ( 2) 00:21:05.838 2.847 - 2.860: 0.5449% ( 108) 00:21:05.838 2.860 - 2.873: 2.1895% ( 332) 00:21:05.838 2.873 - 2.887: 4.9636% ( 560) 00:21:05.838 2.887 - 2.900: 10.1848% ( 1054) 00:21:05.838 2.900 - 2.913: 16.1985% ( 1214) 00:21:05.838 2.913 - 2.927: 20.7014% ( 909) 00:21:05.838 2.927 - 2.940: 26.2694% ( 1124) 00:21:05.838 2.940 - 2.954: 32.0553% ( 1168) 00:21:05.838 2.954 - 2.967: 37.5638% ( 1112) 00:21:05.838 2.967 - 2.980: 42.7899% ( 1055) 00:21:05.838 2.980 - 2.994: 48.0260% ( 1057) 00:21:05.838 2.994 - 3.007: 53.7128% ( 1148) 00:21:05.838 3.007 - 3.020: 61.9161% ( 1656) 00:21:05.838 3.020 - 3.034: 70.5652% ( 1746) 00:21:05.838 3.034 - 3.047: 78.5159% ( 1605) 00:21:05.838 3.047 - 3.060: 85.3074% ( 1371) 00:21:05.838 3.060 - 3.074: 90.5830% ( 1065) 00:21:05.838 3.074 - 3.087: 94.4321% ( 777) 00:21:05.838 3.087 - 3.101: 96.9584% ( 510) 00:21:05.838 3.101 - 3.114: 
98.4891% ( 309) 00:21:05.838 3.114 - 3.127: 99.0737% ( 118) 00:21:05.838 3.127 - 3.141: 99.3808% ( 62) 00:21:05.838 3.141 - 3.154: 99.4997% ( 24) 00:21:05.838 3.154 - 3.167: 99.5542% ( 11) 00:21:05.838 3.167 - 3.181: 99.5839% ( 6) 00:21:05.838 3.181 - 3.194: 99.5988% ( 3) 00:21:05.838 3.368 - 3.381: 99.6037% ( 1) 00:21:05.838 3.715 - 3.742: 99.6087% ( 1) 00:21:05.838 3.769 - 3.796: 99.6136% ( 1) 00:21:05.838 4.063 - 4.090: 99.6186% ( 1) 00:21:05.838 4.090 - 4.116: 99.6235% ( 1) 00:21:05.838 4.116 - 4.143: 99.6285% ( 1) 00:21:05.838 4.170 - 4.196: 99.6334% ( 1) 00:21:05.838 4.277 - 4.303: 99.6384% ( 1) 00:21:05.838 4.330 - 4.357: 99.6433% ( 1) 00:21:05.838 4.384 - 4.410: 99.6483% ( 1) 00:21:05.838 4.571 - 4.597: 99.6582% ( 2) 00:21:05.838 4.597 - 4.624: 99.6631% ( 1) 00:21:05.838 4.624 - 4.651: 99.6681% ( 1) 00:21:05.838 4.651 - 4.678: 99.6731% ( 1) 00:21:05.838 4.678 - 4.704: 99.6780% ( 1) 00:21:05.838 4.784 - 4.811: 99.6879% ( 2) 00:21:05.838 4.811 - 4.838: 99.6929% ( 1) 00:21:05.838 4.891 - 4.918: 99.6978% ( 1) 00:21:05.838 4.945 - 4.972: 99.7077% ( 2) 00:21:05.838 4.972 - 4.998: 99.7176% ( 2) 00:21:05.838 4.998 - 5.025: 99.7226% ( 1) 00:21:05.838 5.052 - 5.079: 99.7275% ( 1) 00:21:05.838 5.105 - 5.132: 99.7325% ( 1) 00:21:05.838 5.132 - 5.159: 99.7474% ( 3) 00:21:05.838 5.346 - 5.373: 99.7523% ( 1) 00:21:05.838 5.586 - 5.613: 99.7573% ( 1) 00:21:05.838 5.613 - 5.640: 99.7622% ( 1) 00:21:05.838 5.747 - 5.773: 99.7672% ( 1) 00:21:05.838 5.773 - 5.800: 99.7820% ( 3) 00:21:05.838 5.800 - 5.827: 99.7870% ( 1) 00:21:05.838 5.907 - 5.934: 99.7919% ( 1) 00:21:05.838 5.934 - 5.961: 99.7969% ( 1) 00:21:05.838 6.067 - 6.094: 99.8019% ( 1) 00:21:05.838 6.148 - 6.174: 99.8068% ( 1) 00:21:05.838 6.174 - 6.201: 99.8118% ( 1) 00:21:05.838 6.255 - 6.281: 99.8167% ( 1) 00:21:05.838 6.308 - 6.335: 99.8217% ( 1) 00:21:05.838 6.335 - 6.362: 99.8316% ( 2) 00:21:05.838 6.362 - 6.388: 99.8365% ( 1) 00:21:05.838 6.495 - 6.522: 99.8464% ( 2) 00:21:05.838 6.709 - 6.736: 99.8514% ( 1) 
00:21:05.838 6.816 - 6.843: 99.8613% ( 2) 00:21:05.838 6.896 - 6.950: 99.8663% ( 1) 00:21:05.839 6.950 - 7.003: 99.8712% ( 1) 00:21:05.839 7.110 - 7.163: 99.8811% ( 2) 00:21:05.839 7.217 - 7.270: 99.8861% ( 1) 00:21:05.839 7.324 - 7.377: 99.8960% ( 2) 00:21:05.839 7.591 - 7.645: 99.9009% ( 1) 00:21:05.839 7.698 - 7.751: 99.9059% ( 1) 00:21:05.839 7.805 - 7.858: 99.9108% ( 1) 00:21:05.839 7.858 - 7.912: 99.9158% ( 1) 00:21:05.839 8.126 - 8.179: 99.9207% ( 1) 00:21:05.839 [2024-11-28 12:51:35.933673] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:06.100 8.607 - 8.660: 99.9257% ( 1) 00:21:06.100 8.767 - 8.821: 99.9306% ( 1) 00:21:06.100 9.248 - 9.302: 99.9356% ( 1) 00:21:06.100 3996.098 - 4023.468: 100.0000% ( 13) 00:21:06.100 00:21:06.100 Complete histogram 00:21:06.100 ================== 00:21:06.100 Range in us Cumulative Count 00:21:06.100 1.637 - 1.644: 0.0050% ( 1) 00:21:06.100 1.644 - 1.651: 0.2229% ( 44) 00:21:06.100 1.651 - 1.657: 0.2675% ( 9) 00:21:06.100 1.657 - 1.664: 0.8025% ( 108) 00:21:06.100 1.664 - 1.671: 1.3375% ( 108) 00:21:06.100 1.671 - 1.677: 1.3821% ( 9) 00:21:06.100 1.677 - 1.684: 1.4316% ( 10) 00:21:06.100 1.684 - 1.691: 24.2681% ( 4610) 00:21:06.100 1.691 - 1.697: 56.6850% ( 6544) 00:21:06.100 1.697 - 1.704: 60.5241% ( 775) 00:21:06.100 1.704 - 1.711: 73.2105% ( 2561) 00:21:06.100 1.711 - 1.724: 81.5029% ( 1674) 00:21:06.100 1.724 - 1.737: 83.8312% ( 470) 00:21:06.100 1.737 - 1.751: 85.8671% ( 411) 00:21:06.100 1.751 - 1.764: 90.6673% ( 969) 00:21:06.100 1.764 - 1.777: 95.7349% ( 1023) 00:21:06.100 1.777 - 1.791: 98.1374% ( 485) 00:21:06.100 1.791 - 1.804: 99.1480% ( 204) 00:21:06.100 1.804 - 1.818: 99.4056% ( 52) 00:21:06.100 1.818 - 1.831: 99.4402% ( 7) 00:21:06.100 1.831 - 1.844: 99.4452% ( 1) 00:21:06.100 1.898 - 1.911: 99.4501% ( 1) 00:21:06.100 3.341 - 3.354: 99.4551% ( 1) 00:21:06.100 3.354 - 3.368: 99.4600% ( 1) 00:21:06.100 3.368 - 3.381: 99.4650% ( 1) 00:21:06.100 3.395 - 
3.408: 99.4700% ( 1) 00:21:06.100 3.582 - 3.608: 99.4749% ( 1) 00:21:06.100 4.303 - 4.330: 99.4799% ( 1) 00:21:06.100 4.410 - 4.437: 99.4848% ( 1) 00:21:06.100 4.544 - 4.571: 99.4947% ( 2) 00:21:06.100 4.624 - 4.651: 99.4997% ( 1) 00:21:06.100 4.758 - 4.784: 99.5046% ( 1) 00:21:06.100 4.784 - 4.811: 99.5096% ( 1) 00:21:06.100 4.811 - 4.838: 99.5145% ( 1) 00:21:06.100 4.838 - 4.865: 99.5195% ( 1) 00:21:06.100 4.945 - 4.972: 99.5244% ( 1) 00:21:06.100 4.998 - 5.025: 99.5294% ( 1) 00:21:06.100 5.079 - 5.105: 99.5344% ( 1) 00:21:06.100 5.159 - 5.185: 99.5393% ( 1) 00:21:06.100 5.346 - 5.373: 99.5492% ( 2) 00:21:06.100 5.506 - 5.533: 99.5542% ( 1) 00:21:06.100 5.667 - 5.693: 99.5591% ( 1) 00:21:06.100 5.693 - 5.720: 99.5690% ( 2) 00:21:06.100 5.800 - 5.827: 99.5740% ( 1) 00:21:06.100 6.094 - 6.121: 99.5789% ( 1) 00:21:06.100 6.495 - 6.522: 99.5839% ( 1) 00:21:06.100 6.522 - 6.549: 99.5888% ( 1) 00:21:06.100 6.816 - 6.843: 99.5938% ( 1) 00:21:06.100 12.135 - 12.188: 99.5988% ( 1) 00:21:06.100 31.220 - 31.433: 99.6037% ( 1) 00:21:06.100 3996.098 - 4023.468: 100.0000% ( 80) 00:21:06.100 00:21:06.100 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:21:06.100 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:21:06.100 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:21:06.100 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:21:06.100 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:21:06.100 [ 00:21:06.100 { 00:21:06.100 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:06.100 "subtype": 
"Discovery", 00:21:06.100 "listen_addresses": [], 00:21:06.100 "allow_any_host": true, 00:21:06.100 "hosts": [] 00:21:06.100 }, 00:21:06.100 { 00:21:06.100 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:21:06.100 "subtype": "NVMe", 00:21:06.100 "listen_addresses": [ 00:21:06.100 { 00:21:06.100 "trtype": "VFIOUSER", 00:21:06.100 "adrfam": "IPv4", 00:21:06.100 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:21:06.100 "trsvcid": "0" 00:21:06.100 } 00:21:06.100 ], 00:21:06.100 "allow_any_host": true, 00:21:06.100 "hosts": [], 00:21:06.100 "serial_number": "SPDK1", 00:21:06.100 "model_number": "SPDK bdev Controller", 00:21:06.100 "max_namespaces": 32, 00:21:06.100 "min_cntlid": 1, 00:21:06.100 "max_cntlid": 65519, 00:21:06.100 "namespaces": [ 00:21:06.100 { 00:21:06.100 "nsid": 1, 00:21:06.100 "bdev_name": "Malloc1", 00:21:06.100 "name": "Malloc1", 00:21:06.100 "nguid": "9A78598DCB6F46EF8333CFC9F52DF32B", 00:21:06.100 "uuid": "9a78598d-cb6f-46ef-8333-cfc9f52df32b" 00:21:06.100 }, 00:21:06.100 { 00:21:06.100 "nsid": 2, 00:21:06.100 "bdev_name": "Malloc3", 00:21:06.100 "name": "Malloc3", 00:21:06.100 "nguid": "8D7490BBDD77429294BA1BE26F6D05AF", 00:21:06.100 "uuid": "8d7490bb-dd77-4292-94ba-1be26f6d05af" 00:21:06.100 } 00:21:06.100 ] 00:21:06.100 }, 00:21:06.100 { 00:21:06.100 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:21:06.100 "subtype": "NVMe", 00:21:06.100 "listen_addresses": [ 00:21:06.100 { 00:21:06.100 "trtype": "VFIOUSER", 00:21:06.100 "adrfam": "IPv4", 00:21:06.100 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:21:06.100 "trsvcid": "0" 00:21:06.100 } 00:21:06.100 ], 00:21:06.100 "allow_any_host": true, 00:21:06.100 "hosts": [], 00:21:06.100 "serial_number": "SPDK2", 00:21:06.100 "model_number": "SPDK bdev Controller", 00:21:06.100 "max_namespaces": 32, 00:21:06.100 "min_cntlid": 1, 00:21:06.100 "max_cntlid": 65519, 00:21:06.100 "namespaces": [ 00:21:06.100 { 00:21:06.100 "nsid": 1, 00:21:06.100 "bdev_name": "Malloc2", 00:21:06.100 "name": "Malloc2", 
00:21:06.100 "nguid": "619A0D0EED7B4B2C9F1D0F37C39D7B3F", 00:21:06.100 "uuid": "619a0d0e-ed7b-4b2c-9f1d-0f37c39d7b3f" 00:21:06.100 } 00:21:06.100 ] 00:21:06.100 } 00:21:06.100 ] 00:21:06.100 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:06.100 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:21:06.100 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3389574 00:21:06.100 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:21:06.100 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:21:06.100 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:06.100 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:06.100 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:21:06.100 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:21:06.100 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:21:06.361 Malloc4 00:21:06.361 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:21:06.361 [2024-11-28 12:51:36.411477] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:06.621 [2024-11-28 12:51:36.510948] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:06.621 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:21:06.621 Asynchronous Event Request test 00:21:06.621 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:21:06.621 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:21:06.621 Registering asynchronous event callbacks... 00:21:06.621 Starting namespace attribute notice tests for all controllers... 00:21:06.621 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:06.621 aer_cb - Changed Namespace 00:21:06.621 Cleaning up... 
00:21:06.621 [ 00:21:06.621 { 00:21:06.621 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:06.621 "subtype": "Discovery", 00:21:06.621 "listen_addresses": [], 00:21:06.621 "allow_any_host": true, 00:21:06.621 "hosts": [] 00:21:06.621 }, 00:21:06.621 { 00:21:06.621 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:21:06.621 "subtype": "NVMe", 00:21:06.621 "listen_addresses": [ 00:21:06.621 { 00:21:06.621 "trtype": "VFIOUSER", 00:21:06.621 "adrfam": "IPv4", 00:21:06.621 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:21:06.621 "trsvcid": "0" 00:21:06.621 } 00:21:06.621 ], 00:21:06.621 "allow_any_host": true, 00:21:06.621 "hosts": [], 00:21:06.621 "serial_number": "SPDK1", 00:21:06.621 "model_number": "SPDK bdev Controller", 00:21:06.621 "max_namespaces": 32, 00:21:06.621 "min_cntlid": 1, 00:21:06.621 "max_cntlid": 65519, 00:21:06.621 "namespaces": [ 00:21:06.621 { 00:21:06.621 "nsid": 1, 00:21:06.621 "bdev_name": "Malloc1", 00:21:06.621 "name": "Malloc1", 00:21:06.621 "nguid": "9A78598DCB6F46EF8333CFC9F52DF32B", 00:21:06.621 "uuid": "9a78598d-cb6f-46ef-8333-cfc9f52df32b" 00:21:06.621 }, 00:21:06.621 { 00:21:06.621 "nsid": 2, 00:21:06.621 "bdev_name": "Malloc3", 00:21:06.621 "name": "Malloc3", 00:21:06.621 "nguid": "8D7490BBDD77429294BA1BE26F6D05AF", 00:21:06.621 "uuid": "8d7490bb-dd77-4292-94ba-1be26f6d05af" 00:21:06.621 } 00:21:06.621 ] 00:21:06.621 }, 00:21:06.621 { 00:21:06.621 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:21:06.621 "subtype": "NVMe", 00:21:06.621 "listen_addresses": [ 00:21:06.621 { 00:21:06.621 "trtype": "VFIOUSER", 00:21:06.621 "adrfam": "IPv4", 00:21:06.621 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:21:06.621 "trsvcid": "0" 00:21:06.621 } 00:21:06.621 ], 00:21:06.621 "allow_any_host": true, 00:21:06.621 "hosts": [], 00:21:06.621 "serial_number": "SPDK2", 00:21:06.621 "model_number": "SPDK bdev Controller", 00:21:06.621 "max_namespaces": 32, 00:21:06.621 "min_cntlid": 1, 00:21:06.621 "max_cntlid": 65519, 00:21:06.621 "namespaces": [ 
00:21:06.621 { 00:21:06.621 "nsid": 1, 00:21:06.621 "bdev_name": "Malloc2", 00:21:06.621 "name": "Malloc2", 00:21:06.621 "nguid": "619A0D0EED7B4B2C9F1D0F37C39D7B3F", 00:21:06.621 "uuid": "619a0d0e-ed7b-4b2c-9f1d-0f37c39d7b3f" 00:21:06.621 }, 00:21:06.621 { 00:21:06.621 "nsid": 2, 00:21:06.621 "bdev_name": "Malloc4", 00:21:06.621 "name": "Malloc4", 00:21:06.621 "nguid": "7D3F502DC2FE48B49474EF7B1F54375C", 00:21:06.621 "uuid": "7d3f502d-c2fe-48b4-9474-ef7b1f54375c" 00:21:06.621 } 00:21:06.621 ] 00:21:06.621 } 00:21:06.621 ] 00:21:06.621 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3389574 00:21:06.621 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:21:06.621 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3379919 00:21:06.621 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3379919 ']' 00:21:06.621 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3379919 00:21:06.621 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:21:06.621 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:06.621 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3379919 00:21:06.882 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:06.883 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:06.883 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3379919' 00:21:06.883 killing process with pid 3379919 00:21:06.883 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 3379919 00:21:06.883 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3379919 00:21:06.883 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:21:06.883 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:21:06.883 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:21:06.883 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:21:06.883 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:21:06.883 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3389907 00:21:06.883 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3389907' 00:21:06.883 Process pid: 3389907 00:21:06.883 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:21:06.883 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:21:06.883 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3389907 00:21:06.883 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3389907 ']' 00:21:06.883 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.883 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:06.883 
12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.883 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:06.883 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:21:06.883 [2024-11-28 12:51:36.993356] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:21:06.883 [2024-11-28 12:51:36.994292] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:21:06.883 [2024-11-28 12:51:36.994338] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.143 [2024-11-28 12:51:37.128591] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:07.143 [2024-11-28 12:51:37.183084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:07.143 [2024-11-28 12:51:37.199140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.143 [2024-11-28 12:51:37.199172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.143 [2024-11-28 12:51:37.199178] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.143 [2024-11-28 12:51:37.199183] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:07.143 [2024-11-28 12:51:37.199187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:07.143 [2024-11-28 12:51:37.200699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.143 [2024-11-28 12:51:37.200851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.143 [2024-11-28 12:51:37.201004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.143 [2024-11-28 12:51:37.201007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:07.143 [2024-11-28 12:51:37.248032] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:21:07.143 [2024-11-28 12:51:37.249033] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:21:07.143 [2024-11-28 12:51:37.249992] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:21:07.143 [2024-11-28 12:51:37.250453] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:21:07.143 [2024-11-28 12:51:37.250469] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
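The JSON block earlier in the log is the output of SPDK's `nvmf_get_subsystems` RPC: two VFIO-user subsystems (`cnode1`, `cnode2`), each with two malloc-backed namespaces, plus the discovery subsystem. A minimal sketch of post-processing that listing to map each NVMe subsystem NQN to its namespace bdevs — the inline JSON here is an abridged copy of the dump above, not live RPC output:

```python
import json

# Abridged copy of the nvmf_get_subsystems output shown in the log above.
subsystems_json = """
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery",
   "listen_addresses": [], "allow_any_host": true, "hosts": []},
  {"nqn": "nqn.2019-07.io.spdk:cnode1", "subtype": "NVMe",
   "listen_addresses": [{"trtype": "VFIOUSER", "adrfam": "IPv4",
                         "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
                         "trsvcid": "0"}],
   "namespaces": [{"nsid": 1, "bdev_name": "Malloc1"},
                  {"nsid": 2, "bdev_name": "Malloc3"}]}
]
"""

def namespaces_by_nqn(raw: str) -> dict:
    """Map each NVMe subsystem NQN to the bdevs backing its namespaces."""
    out = {}
    for subsys in json.loads(raw):
        if subsys.get("subtype") != "NVMe":
            continue  # skip the discovery subsystem, which carries no namespaces
        out[subsys["nqn"]] = [ns["bdev_name"] for ns in subsys.get("namespaces", [])]
    return out

print(namespaces_by_nqn(subsystems_json))
# {'nqn.2019-07.io.spdk:cnode1': ['Malloc1', 'Malloc3']}
```

In the test script itself the same data comes from `rpc.py nvmf_get_subsystems` against the running target's UNIX-domain socket.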
00:21:07.714 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:07.714 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:21:07.714 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:21:08.654 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:21:08.914 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:21:08.914 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:21:08.914 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:21:08.914 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:21:08.914 12:51:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:09.175 Malloc1 00:21:09.175 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:21:09.436 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:21:09.436 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:21:09.695 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:21:09.695 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:21:09.696 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:21:09.956 Malloc2 00:21:09.956 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:21:10.216 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:21:10.216 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:21:10.477 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:21:10.477 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3389907 00:21:10.477 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3389907 ']' 00:21:10.477 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3389907 00:21:10.477 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:21:10.477 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:10.477 12:51:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3389907 00:21:10.477 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:10.477 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:10.477 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3389907' 00:21:10.477 killing process with pid 3389907 00:21:10.477 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3389907 00:21:10.477 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3389907 00:21:10.738 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:21:10.738 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:21:10.738 00:21:10.738 real 0m52.266s 00:21:10.738 user 3m20.380s 00:21:10.738 sys 0m2.696s 00:21:10.738 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:10.738 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:21:10.738 ************************************ 00:21:10.738 END TEST nvmf_vfio_user 00:21:10.738 ************************************ 00:21:10.738 12:51:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:21:10.738 12:51:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:10.738 12:51:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.738 12:51:40 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:21:10.738 ************************************ 00:21:10.738 START TEST nvmf_vfio_user_nvme_compliance 00:21:10.738 ************************************ 00:21:10.738 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:21:10.738 * Looking for test storage... 00:21:10.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:21:10.738 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:10.738 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:21:10.738 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:21:11.001 12:51:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:11.001 12:51:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:11.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.001 --rc genhtml_branch_coverage=1 00:21:11.001 --rc genhtml_function_coverage=1 00:21:11.001 --rc genhtml_legend=1 00:21:11.001 --rc geninfo_all_blocks=1 00:21:11.001 --rc geninfo_unexecuted_blocks=1 00:21:11.001 00:21:11.001 ' 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:11.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.001 --rc genhtml_branch_coverage=1 00:21:11.001 --rc genhtml_function_coverage=1 00:21:11.001 --rc genhtml_legend=1 00:21:11.001 --rc geninfo_all_blocks=1 00:21:11.001 --rc geninfo_unexecuted_blocks=1 00:21:11.001 00:21:11.001 ' 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:11.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.001 --rc genhtml_branch_coverage=1 00:21:11.001 --rc genhtml_function_coverage=1 00:21:11.001 --rc 
genhtml_legend=1 00:21:11.001 --rc geninfo_all_blocks=1 00:21:11.001 --rc geninfo_unexecuted_blocks=1 00:21:11.001 00:21:11.001 ' 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:11.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.001 --rc genhtml_branch_coverage=1 00:21:11.001 --rc genhtml_function_coverage=1 00:21:11.001 --rc genhtml_legend=1 00:21:11.001 --rc geninfo_all_blocks=1 00:21:11.001 --rc geninfo_unexecuted_blocks=1 00:21:11.001 00:21:11.001 ' 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:11.001 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.002 12:51:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:11.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:11.002 12:51:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3390670 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3390670' 00:21:11.002 Process pid: 3390670 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3390670 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 3390670 ']' 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.002 12:51:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:11.002 [2024-11-28 12:51:41.029133] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:21:11.002 [2024-11-28 12:51:41.029210] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.263 [2024-11-28 12:51:41.163999] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:11.263 [2024-11-28 12:51:41.219301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:11.263 [2024-11-28 12:51:41.242038] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.263 [2024-11-28 12:51:41.242074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
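The compliance target above is launched with `-m 0x7`, and the EAL parameter line confirms it as `-c 0x7`, i.e. the three cores the app reports ("Total cores available: 3"); the earlier interrupt-mode run used the list form `-m '[0,1,2,3]'`, equivalent to mask `0xF`. A small helper (hypothetical, for illustration only — not part of SPDK) decoding such a hex core mask into core IDs:

```python
def cores_from_mask(mask: str) -> list:
    """Expand an SPDK/DPDK hex core mask (e.g. '0x7') into a list of core IDs."""
    value = int(mask, 16)
    # Each set bit N in the mask selects logical core N.
    return [bit for bit in range(value.bit_length()) if value >> bit & 1]

print(cores_from_mask("0x7"))   # [0, 1, 2]    -> the 3 cores of the compliance run
print(cores_from_mask("0xF"))   # [0, 1, 2, 3] -> the 4 cores of the interrupt-mode run
```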
00:21:11.263 [2024-11-28 12:51:41.242079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.263 [2024-11-28 12:51:41.242084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.263 [2024-11-28 12:51:41.242089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:11.263 [2024-11-28 12:51:41.243514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.263 [2024-11-28 12:51:41.243670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.263 [2024-11-28 12:51:41.243673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.834 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.834 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:21:11.834 12:51:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:21:12.774 12:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:21:12.774 12:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:21:12.774 12:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:21:12.774 12:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.774 12:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:12.774 12:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.774 12:51:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:21:12.774 12:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:21:12.774 12:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.774 12:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:12.774 malloc0 00:21:12.774 12:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.774 12:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:21:12.774 12:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.774 12:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:12.774 12:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.774 12:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:21:12.774 12:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.774 12:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:12.774 12:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.774 12:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:21:12.774 12:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.774 12:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:13.035 12:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.035 12:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:21:13.035 00:21:13.035 00:21:13.035 CUnit - A unit testing framework for C - Version 2.1-3 00:21:13.035 http://cunit.sourceforge.net/ 00:21:13.035 00:21:13.035 00:21:13.035 Suite: nvme_compliance 00:21:13.296 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-28 12:51:43.161349] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:13.296 [2024-11-28 12:51:43.162645] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:21:13.296 [2024-11-28 12:51:43.162656] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:21:13.296 [2024-11-28 12:51:43.162661] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:21:13.296 [2024-11-28 12:51:43.164355] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:13.296 passed 00:21:13.296 Test: admin_identify_ctrlr_verify_fused ...[2024-11-28 12:51:43.241685] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:13.296 [2024-11-28 12:51:43.244692] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:13.296 passed 00:21:13.296 Test: admin_identify_ns ...[2024-11-28 
12:51:43.324075] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:13.296 [2024-11-28 12:51:43.384168] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:21:13.296 [2024-11-28 12:51:43.392166] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:21:13.296 [2024-11-28 12:51:43.413245] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:13.557 passed 00:21:13.557 Test: admin_get_features_mandatory_features ...[2024-11-28 12:51:43.484372] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:13.557 [2024-11-28 12:51:43.487386] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:13.557 passed 00:21:13.557 Test: admin_get_features_optional_features ...[2024-11-28 12:51:43.566685] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:13.557 [2024-11-28 12:51:43.569700] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:13.557 passed 00:21:13.557 Test: admin_set_features_number_of_queues ...[2024-11-28 12:51:43.644282] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:13.819 [2024-11-28 12:51:43.747247] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:13.819 passed 00:21:13.819 Test: admin_get_log_page_mandatory_logs ...[2024-11-28 12:51:43.823172] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:13.819 [2024-11-28 12:51:43.826185] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:13.819 passed 00:21:13.819 Test: admin_get_log_page_with_lpo ...[2024-11-28 12:51:43.905107] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:14.079 [2024-11-28 12:51:43.978169] ctrlr.c:2699:nvmf_ctrlr_get_log_page: 
*ERROR*: Get log page: offset (516) > len (512) 00:21:14.079 [2024-11-28 12:51:43.991198] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:14.079 passed 00:21:14.079 Test: fabric_property_get ...[2024-11-28 12:51:44.065314] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:14.079 [2024-11-28 12:51:44.066511] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:21:14.079 [2024-11-28 12:51:44.068327] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:14.079 passed 00:21:14.079 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-28 12:51:44.142608] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:14.079 [2024-11-28 12:51:44.143808] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:21:14.079 [2024-11-28 12:51:44.145618] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:14.079 passed 00:21:14.339 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-28 12:51:44.220200] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:14.340 [2024-11-28 12:51:44.306164] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:21:14.340 [2024-11-28 12:51:44.322164] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:21:14.340 [2024-11-28 12:51:44.327239] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:14.340 passed 00:21:14.340 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-28 12:51:44.401310] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:14.340 [2024-11-28 12:51:44.402506] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:21:14.340 [2024-11-28 12:51:44.404320] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:14.340 passed 00:21:14.600 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-28 12:51:44.481490] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:14.600 [2024-11-28 12:51:44.557164] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:21:14.600 [2024-11-28 12:51:44.581168] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:21:14.600 [2024-11-28 12:51:44.586221] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:14.600 passed 00:21:14.600 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-28 12:51:44.658275] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:14.600 [2024-11-28 12:51:44.659471] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:21:14.600 [2024-11-28 12:51:44.659488] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:21:14.600 [2024-11-28 12:51:44.661286] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:14.600 passed 00:21:14.860 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-28 12:51:44.735855] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:14.860 [2024-11-28 12:51:44.828165] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:21:14.860 [2024-11-28 12:51:44.836171] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:21:14.860 [2024-11-28 12:51:44.844166] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:21:14.860 [2024-11-28 12:51:44.852162] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:21:14.860 [2024-11-28 12:51:44.881235] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:14.860 passed 00:21:14.860 Test: admin_create_io_sq_verify_pc ...[2024-11-28 12:51:44.955366] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:14.860 [2024-11-28 12:51:44.974170] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:21:15.122 [2024-11-28 12:51:44.991448] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:15.122 passed 00:21:15.122 Test: admin_create_io_qp_max_qps ...[2024-11-28 12:51:45.065720] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:16.062 [2024-11-28 12:51:46.181166] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:21:16.654 [2024-11-28 12:51:46.559219] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:16.654 passed 00:21:16.654 Test: admin_create_io_sq_shared_cq ...[2024-11-28 12:51:46.632484] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:16.654 [2024-11-28 12:51:46.768168] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:21:16.913 [2024-11-28 12:51:46.805212] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:16.913 passed 00:21:16.913 00:21:16.913 Run Summary: Type Total Ran Passed Failed Inactive 00:21:16.913 suites 1 1 n/a 0 0 00:21:16.913 tests 18 18 18 0 0 00:21:16.913 asserts 360 360 360 0 n/a 00:21:16.913 00:21:16.913 Elapsed time = 1.499 seconds 00:21:16.913 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3390670 00:21:16.913 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 3390670 ']' 00:21:16.914 12:51:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 3390670 00:21:16.914 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:21:16.914 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:16.914 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3390670 00:21:16.914 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:16.914 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:16.914 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3390670' 00:21:16.914 killing process with pid 3390670 00:21:16.914 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 3390670 00:21:16.914 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 3390670 00:21:16.914 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:21:16.914 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:21:16.914 00:21:16.914 real 0m6.281s 00:21:16.914 user 0m17.563s 00:21:16.914 sys 0m0.531s 00:21:16.914 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.914 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:16.914 ************************************ 00:21:16.914 END TEST 
nvmf_vfio_user_nvme_compliance 00:21:16.914 ************************************ 00:21:17.173 12:51:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:21:17.173 12:51:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:17.173 12:51:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:17.173 12:51:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:17.173 ************************************ 00:21:17.173 START TEST nvmf_vfio_user_fuzz 00:21:17.173 ************************************ 00:21:17.173 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:21:17.173 * Looking for test storage... 00:21:17.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:17.174 12:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:17.174 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:17.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.435 --rc genhtml_branch_coverage=1 00:21:17.435 --rc genhtml_function_coverage=1 00:21:17.435 --rc genhtml_legend=1 00:21:17.435 --rc 
geninfo_all_blocks=1 00:21:17.435 --rc geninfo_unexecuted_blocks=1 00:21:17.435 00:21:17.435 ' 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:17.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.435 --rc genhtml_branch_coverage=1 00:21:17.435 --rc genhtml_function_coverage=1 00:21:17.435 --rc genhtml_legend=1 00:21:17.435 --rc geninfo_all_blocks=1 00:21:17.435 --rc geninfo_unexecuted_blocks=1 00:21:17.435 00:21:17.435 ' 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:17.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.435 --rc genhtml_branch_coverage=1 00:21:17.435 --rc genhtml_function_coverage=1 00:21:17.435 --rc genhtml_legend=1 00:21:17.435 --rc geninfo_all_blocks=1 00:21:17.435 --rc geninfo_unexecuted_blocks=1 00:21:17.435 00:21:17.435 ' 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:17.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.435 --rc genhtml_branch_coverage=1 00:21:17.435 --rc genhtml_function_coverage=1 00:21:17.435 --rc genhtml_legend=1 00:21:17.435 --rc geninfo_all_blocks=1 00:21:17.435 --rc geninfo_unexecuted_blocks=1 00:21:17.435 00:21:17.435 ' 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:21:17.435 
12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:17.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:21:17.435 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3392051 00:21:17.436 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3392051' 00:21:17.436 Process pid: 3392051 00:21:17.436 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # 
trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:21:17.436 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:17.436 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3392051 00:21:17.436 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3392051 ']' 00:21:17.436 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.436 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.436 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:17.436 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.436 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:18.375 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.375 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:21:18.375 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:21:19.316 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:21:19.316 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.316 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:19.316 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.316 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:21:19.316 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:21:19.316 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.316 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:19.316 malloc0 00:21:19.316 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.316 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:21:19.316 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.316 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:19.316 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.316 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:21:19.316 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.316 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:19.316 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.316 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:21:19.316 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.316 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:19.316 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.316 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:21:19.316 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:21:51.449 Fuzzing completed. 
Shutting down the fuzz application 00:21:51.449 00:21:51.449 Dumping successful admin opcodes: 00:21:51.449 9, 10, 00:21:51.449 Dumping successful io opcodes: 00:21:51.449 0, 00:21:51.449 NS: 0x20000081ef00 I/O qp, Total commands completed: 1437717, total successful commands: 5633, random_seed: 3387549056 00:21:51.449 NS: 0x20000081ef00 admin qp, Total commands completed: 357312, total successful commands: 94, random_seed: 4026537152 00:21:51.449 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:21:51.449 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.449 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:51.449 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.449 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3392051 00:21:51.449 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3392051 ']' 00:21:51.449 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 3392051 00:21:51.449 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:21:51.449 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:51.449 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3392051 00:21:51.449 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:51.449 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:51.449 12:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3392051' 00:21:51.449 killing process with pid 3392051 00:21:51.449 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 3392051 00:21:51.450 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 3392051 00:21:51.450 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:21:51.450 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:21:51.450 00:21:51.450 real 0m32.795s 00:21:51.450 user 0m37.621s 00:21:51.450 sys 0m24.712s 00:21:51.450 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:51.450 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:51.450 ************************************ 00:21:51.450 END TEST nvmf_vfio_user_fuzz 00:21:51.450 ************************************ 00:21:51.450 12:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:51.450 12:52:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:51.450 12:52:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:51.450 12:52:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:51.450 ************************************ 00:21:51.450 START TEST nvmf_auth_target 00:21:51.450 ************************************ 00:21:51.450 12:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:51.450 * Looking for test storage... 00:21:51.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:51.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.450 --rc genhtml_branch_coverage=1 00:21:51.450 --rc genhtml_function_coverage=1 00:21:51.450 --rc genhtml_legend=1 00:21:51.450 --rc geninfo_all_blocks=1 00:21:51.450 --rc geninfo_unexecuted_blocks=1 00:21:51.450 00:21:51.450 ' 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:51.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.450 --rc genhtml_branch_coverage=1 00:21:51.450 --rc genhtml_function_coverage=1 00:21:51.450 --rc genhtml_legend=1 00:21:51.450 --rc geninfo_all_blocks=1 00:21:51.450 --rc geninfo_unexecuted_blocks=1 00:21:51.450 00:21:51.450 ' 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:51.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.450 --rc genhtml_branch_coverage=1 00:21:51.450 --rc genhtml_function_coverage=1 00:21:51.450 --rc genhtml_legend=1 00:21:51.450 --rc geninfo_all_blocks=1 00:21:51.450 --rc geninfo_unexecuted_blocks=1 00:21:51.450 00:21:51.450 ' 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:51.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.450 --rc genhtml_branch_coverage=1 00:21:51.450 --rc genhtml_function_coverage=1 00:21:51.450 --rc genhtml_legend=1 00:21:51.450 --rc geninfo_all_blocks=1 00:21:51.450 --rc geninfo_unexecuted_blocks=1 00:21:51.450 00:21:51.450 ' 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:51.450 12:52:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.450 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:51.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:51.451 12:52:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:51.451 12:52:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:21:51.451 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:21:58.063 12:52:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:58.063 12:52:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:58.063 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:58.063 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.063 
12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:58.063 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:58.063 
12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:58.063 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:58.063 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:58.064 12:52:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:58.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:58.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.539 ms
00:21:58.064
00:21:58.064 --- 10.0.0.2 ping statistics ---
00:21:58.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:58.064 rtt min/avg/max/mdev = 0.539/0.539/0.539/0.000 ms
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:58.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:58.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms
00:21:58.064
00:21:58.064 --- 10.0.0.1 ping statistics ---
00:21:58.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:58.064 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3402007
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3402007
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3402007 ']'
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:58.064 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:58.636 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3402075
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=af3e60d4d7cda4633b12da726cb9439243493f8e8e069f40
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.WtY
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key af3e60d4d7cda4633b12da726cb9439243493f8e8e069f40 0
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 af3e60d4d7cda4633b12da726cb9439243493f8e8e069f40 0
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=af3e60d4d7cda4633b12da726cb9439243493f8e8e069f40
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.WtY
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.WtY
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.WtY
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=085c8715e4b220a7f37dccedce610af7a0a5fb20acfce4453b170fba2e52bebe
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.4vW
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 085c8715e4b220a7f37dccedce610af7a0a5fb20acfce4453b170fba2e52bebe 3
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 085c8715e4b220a7f37dccedce610af7a0a5fb20acfce4453b170fba2e52bebe 3
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=085c8715e4b220a7f37dccedce610af7a0a5fb20acfce4453b170fba2e52bebe
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:21:58.637 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:21:58.898 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.4vW
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.4vW
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.4vW
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8989f51060d78bbf9c0476d5e97cec75
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.kpF
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8989f51060d78bbf9c0476d5e97cec75 1
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8989f51060d78bbf9c0476d5e97cec75 1
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8989f51060d78bbf9c0476d5e97cec75
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.kpF
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.kpF
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.kpF
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9cf6ae45f54d907c05172985d03245ed0c29c8f75dcaee92
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.uow
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9cf6ae45f54d907c05172985d03245ed0c29c8f75dcaee92 2
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9cf6ae45f54d907c05172985d03245ed0c29c8f75dcaee92 2
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9cf6ae45f54d907c05172985d03245ed0c29c8f75dcaee92
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.uow
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.uow
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.uow
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b89663e9350c4b5bbeadf3c670073bfda7f2645e43e6bac5
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Wx3
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b89663e9350c4b5bbeadf3c670073bfda7f2645e43e6bac5 2
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b89663e9350c4b5bbeadf3c670073bfda7f2645e43e6bac5 2
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b89663e9350c4b5bbeadf3c670073bfda7f2645e43e6bac5
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Wx3
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Wx3
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Wx3
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=64d91241b7f62761e56c418284e8637e
00:21:58.899 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:21:58.899 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.muB
00:21:58.899 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 64d91241b7f62761e56c418284e8637e 1
00:21:58.899 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 64d91241b7f62761e56c418284e8637e 1
00:21:58.899 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:21:58.899 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:21:58.899 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=64d91241b7f62761e56c418284e8637e
00:21:58.899 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:21:58.899 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:21:59.160 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.muB
00:21:59.160 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.muB
00:21:59.160 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.muB
00:21:59.160 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64
00:21:59.160 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:21:59.160 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:21:59.160 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:21:59.160 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:21:59.160 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:21:59.160 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:21:59.160 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=14cd7b1c05e43b37f1bdfe8168d4b3f9f029a186541664c8a7c8274fdb1f240d
00:21:59.160 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:21:59.160 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.UlF
00:21:59.160 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 14cd7b1c05e43b37f1bdfe8168d4b3f9f029a186541664c8a7c8274fdb1f240d 3
00:21:59.160 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 14cd7b1c05e43b37f1bdfe8168d4b3f9f029a186541664c8a7c8274fdb1f240d 3
00:21:59.161 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:21:59.161 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:21:59.161 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=14cd7b1c05e43b37f1bdfe8168d4b3f9f029a186541664c8a7c8274fdb1f240d
00:21:59.161 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:21:59.161 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:21:59.161 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.UlF
00:21:59.161 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.UlF
00:21:59.161 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.UlF
00:21:59.161 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]=
00:21:59.161 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3402007
00:21:59.161 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3402007 ']'
00:21:59.161 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:59.161 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:59.161 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:59.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
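The gen_dhchap_key calls traced above draw random bytes with xxd and pipe them through an inline `python -` snippet (not shown in the trace) to produce a DHHC-1 secret. As a hedged sketch only, the TP-8006-style encoding such a helper typically performs is base64 over the key bytes followed by their CRC32; the function name and exact encoding below are assumptions of mine, not code copied from nvmf/common.sh:

```python
import base64
import struct
import zlib

def format_dhchap_key(hex_key: str, digest_id: int) -> str:
    """Sketch of a DHHC-1 secret encoding (NVMe TP-8006 style).

    digest_id matches the log's digests map: 0=null, 1=sha256,
    2=sha384, 3=sha512. Assumption: the base64 payload is the raw
    key followed by its CRC32 in little-endian, as nvme-cli's
    gen-dhchap-key produces.
    """
    key = bytes.fromhex(hex_key)
    blob = key + struct.pack("<I", zlib.crc32(key))
    return "DHHC-1:%02x:%s:" % (digest_id, base64.b64encode(blob).decode())

# e.g. the first 48-hex-digit key generated in the log, digest id 0 (null):
print(format_dhchap_key("af3e60d4d7cda4633b12da726cb9439243493f8e8e069f40", 0))
```

The resulting string is what ends up in files like /tmp/spdk.key-null.WtY before they are registered via keyring_file_add_key.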
00:21:59.161 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:59.161 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:59.422 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:59.422 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:21:59.422 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3402075 /var/tmp/host.sock
00:21:59.422 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3402075 ']'
00:21:59.422 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:21:59.422 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:59.422 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:21:59.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:21:59.422 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:59.422 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:59.422 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:59.422 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:21:59.422 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd
00:21:59.422 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.422 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:59.422 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.422 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:21:59.422 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.WtY
00:21:59.422 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.422 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:59.422 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.422 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.WtY
00:21:59.422 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.WtY
00:21:59.683 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.4vW ]]
00:21:59.683 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4vW
00:21:59.683 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.683 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:59.683 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.683 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4vW
00:21:59.683 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4vW
00:21:59.943 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:21:59.943 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kpF
00:21:59.943 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.943 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:59.944 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.944 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.kpF
00:21:59.944 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.kpF
00:21:59.944 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.uow ]]
00:21:59.944 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uow
00:21:59.944 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.944 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:59.944 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.944 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uow
00:21:59.944 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uow
00:22:00.204 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:22:00.204 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Wx3
00:22:00.204 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:00.204 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:00.204 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.204 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Wx3
00:22:00.204 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Wx3
00:22:00.465 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.muB ]]
00:22:00.465 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.muB
00:22:00.465 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:00.465 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:00.465 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.465 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.muB
00:22:00.465 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.muB
00:22:00.726 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:22:00.726 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.UlF
00:22:00.726 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:00.726 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:00.726 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.726 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.UlF
00:22:00.726 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.UlF
00:22:00.987 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]]
00:22:00.988 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:22:00.988 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:22:00.988 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:00.988 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:22:00.988 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:22:00.988 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0
00:22:00.988 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:00.988 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:22:00.988 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:22:00.988 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:22:00.988 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:00.988 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:00.988 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:00.988 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:00.988 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.988 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:00.988 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:00.988 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:01.248
00:22:01.248 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:01.248 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:01.248 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:01.510 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:01.510 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:01.510 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:01.510 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:01.510 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:01.510 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:01.510 {
00:22:01.510 "cntlid": 1,
00:22:01.510 "qid": 0,
00:22:01.510 "state": "enabled",
00:22:01.510 "thread": "nvmf_tgt_poll_group_000",
00:22:01.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:22:01.510 "listen_address": {
00:22:01.510 "trtype": "TCP",
00:22:01.510 "adrfam": "IPv4",
00:22:01.510 "traddr": "10.0.0.2",
00:22:01.510 "trsvcid": "4420"
00:22:01.510 },
00:22:01.510 "peer_address": {
00:22:01.510 "trtype": "TCP",
00:22:01.510 "adrfam": "IPv4",
00:22:01.510 "traddr": "10.0.0.1",
00:22:01.510 "trsvcid": "46112"
00:22:01.510 },
00:22:01.510 "auth": {
00:22:01.510 "state": "completed",
00:22:01.510 "digest": "sha256",
00:22:01.510 "dhgroup": "null"
00:22:01.510 }
00:22:01.510 }
00:22:01.510 ]'
00:22:01.510 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:01.510 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:22:01.510 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:01.510 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:22:01.510 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:01.510 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:01.510 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:01.510 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.771 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:22:01.771 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:22:02.712 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.712 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:02.712 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.712 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.712 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.712 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.712 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:22:02.712 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:02.712 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:22:02.712 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.712 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:02.712 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:02.712 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:02.712 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.712 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.712 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.712 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.712 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.712 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.712 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.712 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.973 00:22:02.973 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.973 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.973 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.235 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.235 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.235 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.235 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.235 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.235 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.235 { 00:22:03.235 "cntlid": 3, 00:22:03.235 "qid": 0, 00:22:03.235 "state": "enabled", 00:22:03.235 "thread": "nvmf_tgt_poll_group_000", 00:22:03.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:03.235 "listen_address": { 00:22:03.235 "trtype": "TCP", 00:22:03.235 "adrfam": "IPv4", 00:22:03.235 
"traddr": "10.0.0.2", 00:22:03.235 "trsvcid": "4420" 00:22:03.235 }, 00:22:03.235 "peer_address": { 00:22:03.235 "trtype": "TCP", 00:22:03.235 "adrfam": "IPv4", 00:22:03.235 "traddr": "10.0.0.1", 00:22:03.235 "trsvcid": "46138" 00:22:03.235 }, 00:22:03.235 "auth": { 00:22:03.235 "state": "completed", 00:22:03.235 "digest": "sha256", 00:22:03.235 "dhgroup": "null" 00:22:03.235 } 00:22:03.235 } 00:22:03.235 ]' 00:22:03.235 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.235 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:03.235 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.235 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:03.235 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.235 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.235 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.235 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.496 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:22:03.496 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:22:04.067 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.067 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:04.067 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.067 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.067 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.067 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.067 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:04.067 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:04.329 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:22:04.329 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.329 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:04.329 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:22:04.329 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:04.329 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.329 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.329 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.329 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.329 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.329 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.329 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.329 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.590 00:22:04.590 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.590 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.590 
12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.851 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.851 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.851 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.851 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.851 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.851 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.851 { 00:22:04.851 "cntlid": 5, 00:22:04.851 "qid": 0, 00:22:04.851 "state": "enabled", 00:22:04.851 "thread": "nvmf_tgt_poll_group_000", 00:22:04.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:04.851 "listen_address": { 00:22:04.851 "trtype": "TCP", 00:22:04.851 "adrfam": "IPv4", 00:22:04.851 "traddr": "10.0.0.2", 00:22:04.851 "trsvcid": "4420" 00:22:04.851 }, 00:22:04.851 "peer_address": { 00:22:04.851 "trtype": "TCP", 00:22:04.851 "adrfam": "IPv4", 00:22:04.851 "traddr": "10.0.0.1", 00:22:04.851 "trsvcid": "46172" 00:22:04.851 }, 00:22:04.851 "auth": { 00:22:04.851 "state": "completed", 00:22:04.851 "digest": "sha256", 00:22:04.851 "dhgroup": "null" 00:22:04.851 } 00:22:04.851 } 00:22:04.851 ]' 00:22:04.851 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.851 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:04.851 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:22:04.851 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:04.851 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.851 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.851 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.851 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.110 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:22:05.110 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:22:05.725 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.725 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:05.725 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.725 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.725 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.725 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.725 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:05.725 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:06.011 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:22:06.011 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.011 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:06.011 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:06.011 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:06.011 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.011 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:06.011 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.011 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:06.011 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.011 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:06.011 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:06.011 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:06.011 00:22:06.011 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.011 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.011 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.272 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.272 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.272 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.272 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.272 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.272 
12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.272 { 00:22:06.272 "cntlid": 7, 00:22:06.272 "qid": 0, 00:22:06.272 "state": "enabled", 00:22:06.272 "thread": "nvmf_tgt_poll_group_000", 00:22:06.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:06.272 "listen_address": { 00:22:06.272 "trtype": "TCP", 00:22:06.272 "adrfam": "IPv4", 00:22:06.272 "traddr": "10.0.0.2", 00:22:06.272 "trsvcid": "4420" 00:22:06.272 }, 00:22:06.272 "peer_address": { 00:22:06.272 "trtype": "TCP", 00:22:06.272 "adrfam": "IPv4", 00:22:06.272 "traddr": "10.0.0.1", 00:22:06.272 "trsvcid": "60176" 00:22:06.272 }, 00:22:06.272 "auth": { 00:22:06.272 "state": "completed", 00:22:06.272 "digest": "sha256", 00:22:06.272 "dhgroup": "null" 00:22:06.272 } 00:22:06.272 } 00:22:06.272 ]' 00:22:06.272 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.272 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:06.272 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.532 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:06.532 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.532 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.532 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.532 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.532 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:22:06.532 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:22:07.474 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.474 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:07.474 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.474 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.474 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.474 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:07.474 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.474 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:07.474 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:22:07.474 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:22:07.474 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.474 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:07.474 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:07.474 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:07.474 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.474 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.474 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.474 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.474 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.474 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.474 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.474 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.734 00:22:07.734 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.734 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.734 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.995 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.995 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.995 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.995 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.995 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.995 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.995 { 00:22:07.995 "cntlid": 9, 00:22:07.995 "qid": 0, 00:22:07.995 "state": "enabled", 00:22:07.995 "thread": "nvmf_tgt_poll_group_000", 00:22:07.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:07.995 "listen_address": { 00:22:07.995 "trtype": "TCP", 00:22:07.995 "adrfam": "IPv4", 00:22:07.995 "traddr": "10.0.0.2", 00:22:07.995 "trsvcid": "4420" 00:22:07.995 }, 00:22:07.995 "peer_address": { 00:22:07.995 "trtype": "TCP", 00:22:07.995 "adrfam": "IPv4", 00:22:07.995 "traddr": "10.0.0.1", 00:22:07.995 "trsvcid": "60200" 00:22:07.995 
}, 00:22:07.995 "auth": { 00:22:07.995 "state": "completed", 00:22:07.995 "digest": "sha256", 00:22:07.995 "dhgroup": "ffdhe2048" 00:22:07.995 } 00:22:07.995 } 00:22:07.995 ]' 00:22:07.995 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.995 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:07.995 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.995 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:07.995 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.995 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.995 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.995 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.256 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:22:08.256 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret 
DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:22:08.828 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.828 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:08.828 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.828 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.828 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.828 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:08.828 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:08.828 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:09.088 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:22:09.088 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.088 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:09.088 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:09.088 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:22:09.088 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.088 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.088 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.088 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.088 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.088 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.088 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.088 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.349 00:22:09.350 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.350 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.350 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.611 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.611 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.611 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.611 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.611 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.611 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.611 { 00:22:09.611 "cntlid": 11, 00:22:09.611 "qid": 0, 00:22:09.611 "state": "enabled", 00:22:09.611 "thread": "nvmf_tgt_poll_group_000", 00:22:09.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:09.611 "listen_address": { 00:22:09.611 "trtype": "TCP", 00:22:09.611 "adrfam": "IPv4", 00:22:09.611 "traddr": "10.0.0.2", 00:22:09.611 "trsvcid": "4420" 00:22:09.611 }, 00:22:09.611 "peer_address": { 00:22:09.611 "trtype": "TCP", 00:22:09.611 "adrfam": "IPv4", 00:22:09.611 "traddr": "10.0.0.1", 00:22:09.611 "trsvcid": "60230" 00:22:09.611 }, 00:22:09.611 "auth": { 00:22:09.611 "state": "completed", 00:22:09.611 "digest": "sha256", 00:22:09.611 "dhgroup": "ffdhe2048" 00:22:09.611 } 00:22:09.611 } 00:22:09.611 ]' 00:22:09.611 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.611 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:09.611 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.611 12:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:09.611 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.611 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.611 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.611 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.873 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:22:09.873 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:22:10.445 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.445 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:10.445 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:10.445 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.445 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.707 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.707 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:10.707 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:10.707 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:22:10.707 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.707 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:10.707 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:10.707 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:10.707 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.707 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.707 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.707 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:22:10.707 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.707 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.707 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.707 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.967 00:22:10.967 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.967 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.967 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.228 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.228 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.228 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.228 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.228 12:52:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.228 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.228 { 00:22:11.228 "cntlid": 13, 00:22:11.228 "qid": 0, 00:22:11.228 "state": "enabled", 00:22:11.228 "thread": "nvmf_tgt_poll_group_000", 00:22:11.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:11.228 "listen_address": { 00:22:11.228 "trtype": "TCP", 00:22:11.228 "adrfam": "IPv4", 00:22:11.228 "traddr": "10.0.0.2", 00:22:11.228 "trsvcid": "4420" 00:22:11.228 }, 00:22:11.228 "peer_address": { 00:22:11.228 "trtype": "TCP", 00:22:11.228 "adrfam": "IPv4", 00:22:11.228 "traddr": "10.0.0.1", 00:22:11.228 "trsvcid": "60258" 00:22:11.228 }, 00:22:11.228 "auth": { 00:22:11.228 "state": "completed", 00:22:11.228 "digest": "sha256", 00:22:11.228 "dhgroup": "ffdhe2048" 00:22:11.228 } 00:22:11.228 } 00:22:11.228 ]' 00:22:11.228 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.228 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:11.228 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.228 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:11.228 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.228 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.228 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.228 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.489 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:22:11.489 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:22:12.062 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.062 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:12.062 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.062 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.323 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.323 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.323 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:12.323 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:12.323 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:22:12.323 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.323 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:12.323 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:12.323 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:12.323 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.323 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:12.323 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.323 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.323 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.323 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:12.323 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.323 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.584 00:22:12.584 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.584 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.584 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.844 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.844 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.844 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.845 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.845 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.845 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.845 { 00:22:12.845 "cntlid": 15, 00:22:12.845 "qid": 0, 00:22:12.845 "state": "enabled", 00:22:12.845 "thread": "nvmf_tgt_poll_group_000", 00:22:12.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:12.845 "listen_address": { 00:22:12.845 "trtype": "TCP", 00:22:12.845 "adrfam": "IPv4", 00:22:12.845 "traddr": "10.0.0.2", 00:22:12.845 "trsvcid": "4420" 00:22:12.845 }, 00:22:12.845 "peer_address": { 00:22:12.845 "trtype": "TCP", 00:22:12.845 "adrfam": "IPv4", 00:22:12.845 "traddr": "10.0.0.1", 
00:22:12.845 "trsvcid": "60272" 00:22:12.845 }, 00:22:12.845 "auth": { 00:22:12.845 "state": "completed", 00:22:12.845 "digest": "sha256", 00:22:12.845 "dhgroup": "ffdhe2048" 00:22:12.845 } 00:22:12.845 } 00:22:12.845 ]' 00:22:12.845 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:12.845 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:12.845 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:12.845 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:12.845 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:12.845 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.845 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.845 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.105 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:22:13.105 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:22:13.675 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.675 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:13.675 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.675 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.675 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.675 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:13.675 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:13.675 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:13.675 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:13.935 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:22:13.935 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:13.935 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:13.935 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:13.935 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:13.935 12:52:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.935 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.935 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.935 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.935 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.935 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.935 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.935 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.195 00:22:14.195 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:14.195 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:14.195 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.455 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.455 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.455 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.455 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.455 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.455 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:14.455 { 00:22:14.455 "cntlid": 17, 00:22:14.455 "qid": 0, 00:22:14.455 "state": "enabled", 00:22:14.455 "thread": "nvmf_tgt_poll_group_000", 00:22:14.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:14.456 "listen_address": { 00:22:14.456 "trtype": "TCP", 00:22:14.456 "adrfam": "IPv4", 00:22:14.456 "traddr": "10.0.0.2", 00:22:14.456 "trsvcid": "4420" 00:22:14.456 }, 00:22:14.456 "peer_address": { 00:22:14.456 "trtype": "TCP", 00:22:14.456 "adrfam": "IPv4", 00:22:14.456 "traddr": "10.0.0.1", 00:22:14.456 "trsvcid": "60306" 00:22:14.456 }, 00:22:14.456 "auth": { 00:22:14.456 "state": "completed", 00:22:14.456 "digest": "sha256", 00:22:14.456 "dhgroup": "ffdhe3072" 00:22:14.456 } 00:22:14.456 } 00:22:14.456 ]' 00:22:14.456 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:14.456 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:14.456 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:14.456 12:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:14.456 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:14.456 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.456 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.456 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.717 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:22:14.717 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:22:15.288 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.288 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:15.288 12:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.288 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.288 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.288 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:15.288 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:15.288 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:15.548 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:22:15.548 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.548 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:15.548 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:15.548 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:15.548 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.548 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.548 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.548 12:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.548 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.548 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.548 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.548 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.808 00:22:15.808 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.808 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.808 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.069 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.069 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.069 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.069 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:16.069 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.069 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.069 { 00:22:16.069 "cntlid": 19, 00:22:16.069 "qid": 0, 00:22:16.069 "state": "enabled", 00:22:16.069 "thread": "nvmf_tgt_poll_group_000", 00:22:16.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:16.069 "listen_address": { 00:22:16.069 "trtype": "TCP", 00:22:16.069 "adrfam": "IPv4", 00:22:16.069 "traddr": "10.0.0.2", 00:22:16.069 "trsvcid": "4420" 00:22:16.069 }, 00:22:16.069 "peer_address": { 00:22:16.069 "trtype": "TCP", 00:22:16.069 "adrfam": "IPv4", 00:22:16.069 "traddr": "10.0.0.1", 00:22:16.069 "trsvcid": "60026" 00:22:16.069 }, 00:22:16.069 "auth": { 00:22:16.069 "state": "completed", 00:22:16.069 "digest": "sha256", 00:22:16.069 "dhgroup": "ffdhe3072" 00:22:16.069 } 00:22:16.069 } 00:22:16.069 ]' 00:22:16.069 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.069 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:16.069 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:16.069 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:16.069 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:16.069 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.069 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.069 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.331 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:22:16.331 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:22:16.901 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.161 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:17.161 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.161 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.161 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.161 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.161 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:17.161 12:52:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:17.161 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:22:17.161 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.161 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:17.161 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:17.161 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:17.161 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.161 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.161 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.161 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.161 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.161 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.161 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.161 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.421 00:22:17.421 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:17.421 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:17.421 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.681 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.681 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.681 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.681 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.681 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.681 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:17.681 { 00:22:17.681 "cntlid": 21, 00:22:17.681 "qid": 0, 00:22:17.681 "state": "enabled", 00:22:17.681 "thread": "nvmf_tgt_poll_group_000", 00:22:17.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:17.681 "listen_address": { 00:22:17.681 "trtype": "TCP", 00:22:17.681 "adrfam": "IPv4", 00:22:17.681 "traddr": "10.0.0.2", 00:22:17.681 
"trsvcid": "4420" 00:22:17.681 }, 00:22:17.681 "peer_address": { 00:22:17.681 "trtype": "TCP", 00:22:17.681 "adrfam": "IPv4", 00:22:17.681 "traddr": "10.0.0.1", 00:22:17.681 "trsvcid": "60054" 00:22:17.681 }, 00:22:17.681 "auth": { 00:22:17.681 "state": "completed", 00:22:17.681 "digest": "sha256", 00:22:17.681 "dhgroup": "ffdhe3072" 00:22:17.681 } 00:22:17.681 } 00:22:17.681 ]' 00:22:17.681 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:17.681 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:17.681 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.681 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:17.681 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.942 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.942 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.942 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.942 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:22:17.942 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:22:18.882 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.882 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:18.882 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.882 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.882 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.882 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:18.882 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:18.882 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:18.882 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:22:18.882 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:18.882 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:18.882 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:18.882 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:18.882 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.882 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:18.882 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.882 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.882 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.882 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:18.882 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:18.882 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:19.143 00:22:19.143 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:19.143 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:22:19.143 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.403 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.403 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.403 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.403 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.403 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.403 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:19.403 { 00:22:19.403 "cntlid": 23, 00:22:19.403 "qid": 0, 00:22:19.403 "state": "enabled", 00:22:19.403 "thread": "nvmf_tgt_poll_group_000", 00:22:19.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:19.403 "listen_address": { 00:22:19.403 "trtype": "TCP", 00:22:19.403 "adrfam": "IPv4", 00:22:19.403 "traddr": "10.0.0.2", 00:22:19.403 "trsvcid": "4420" 00:22:19.403 }, 00:22:19.403 "peer_address": { 00:22:19.403 "trtype": "TCP", 00:22:19.403 "adrfam": "IPv4", 00:22:19.403 "traddr": "10.0.0.1", 00:22:19.403 "trsvcid": "60074" 00:22:19.403 }, 00:22:19.403 "auth": { 00:22:19.403 "state": "completed", 00:22:19.403 "digest": "sha256", 00:22:19.403 "dhgroup": "ffdhe3072" 00:22:19.403 } 00:22:19.403 } 00:22:19.403 ]' 00:22:19.403 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:19.403 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:19.403 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.403 12:52:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:19.403 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.403 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.403 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.403 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.685 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:22:19.685 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:22:20.257 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.257 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:20.257 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.257 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:20.257 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.257 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:20.257 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:20.257 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:20.257 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:20.517 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:22:20.517 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:20.517 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:20.517 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:20.517 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:20.517 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.517 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.517 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.517 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:22:20.517 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.517 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.517 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.517 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.777 00:22:20.777 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:20.777 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:20.777 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.041 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.041 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.041 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.041 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.041 12:52:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.041 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:21.041 { 00:22:21.041 "cntlid": 25, 00:22:21.041 "qid": 0, 00:22:21.041 "state": "enabled", 00:22:21.041 "thread": "nvmf_tgt_poll_group_000", 00:22:21.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:21.041 "listen_address": { 00:22:21.041 "trtype": "TCP", 00:22:21.041 "adrfam": "IPv4", 00:22:21.041 "traddr": "10.0.0.2", 00:22:21.041 "trsvcid": "4420" 00:22:21.041 }, 00:22:21.041 "peer_address": { 00:22:21.041 "trtype": "TCP", 00:22:21.041 "adrfam": "IPv4", 00:22:21.041 "traddr": "10.0.0.1", 00:22:21.041 "trsvcid": "60098" 00:22:21.041 }, 00:22:21.041 "auth": { 00:22:21.041 "state": "completed", 00:22:21.041 "digest": "sha256", 00:22:21.041 "dhgroup": "ffdhe4096" 00:22:21.041 } 00:22:21.041 } 00:22:21.041 ]' 00:22:21.041 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:21.041 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:21.041 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:21.041 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:21.041 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:21.041 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.041 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.042 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.302 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:22:21.302 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:22:21.873 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.873 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:21.873 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.873 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.873 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.873 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:21.873 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:21.873 12:52:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:22.134 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:22:22.134 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:22.134 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:22.134 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:22.134 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:22.134 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.134 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.134 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.134 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.134 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.134 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.134 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.134 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.393 00:22:22.393 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:22.394 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:22.394 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.655 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.655 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.655 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.655 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.655 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.655 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:22.655 { 00:22:22.655 "cntlid": 27, 00:22:22.655 "qid": 0, 00:22:22.655 "state": "enabled", 00:22:22.655 "thread": "nvmf_tgt_poll_group_000", 00:22:22.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:22.655 "listen_address": { 00:22:22.655 "trtype": "TCP", 00:22:22.655 "adrfam": "IPv4", 00:22:22.655 "traddr": "10.0.0.2", 00:22:22.655 
"trsvcid": "4420" 00:22:22.655 }, 00:22:22.655 "peer_address": { 00:22:22.655 "trtype": "TCP", 00:22:22.655 "adrfam": "IPv4", 00:22:22.655 "traddr": "10.0.0.1", 00:22:22.655 "trsvcid": "60108" 00:22:22.655 }, 00:22:22.655 "auth": { 00:22:22.655 "state": "completed", 00:22:22.655 "digest": "sha256", 00:22:22.655 "dhgroup": "ffdhe4096" 00:22:22.655 } 00:22:22.655 } 00:22:22.655 ]' 00:22:22.655 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:22.655 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:22.655 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:22.655 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:22.655 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:22.655 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.655 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.655 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.916 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:22:22.916 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:22:23.488 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.488 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:23.488 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.488 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.488 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.488 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:23.488 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:23.488 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:23.749 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:22:23.749 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:23.749 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:23.749 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:23.749 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:23.749 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.749 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.749 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.749 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.749 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.749 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.749 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.749 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.010 00:22:24.010 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:24.010 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:22:24.010 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.271 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.271 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.272 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.272 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.272 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.272 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:24.272 { 00:22:24.272 "cntlid": 29, 00:22:24.272 "qid": 0, 00:22:24.272 "state": "enabled", 00:22:24.272 "thread": "nvmf_tgt_poll_group_000", 00:22:24.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:24.272 "listen_address": { 00:22:24.272 "trtype": "TCP", 00:22:24.272 "adrfam": "IPv4", 00:22:24.272 "traddr": "10.0.0.2", 00:22:24.272 "trsvcid": "4420" 00:22:24.272 }, 00:22:24.272 "peer_address": { 00:22:24.272 "trtype": "TCP", 00:22:24.272 "adrfam": "IPv4", 00:22:24.272 "traddr": "10.0.0.1", 00:22:24.272 "trsvcid": "60130" 00:22:24.272 }, 00:22:24.272 "auth": { 00:22:24.272 "state": "completed", 00:22:24.272 "digest": "sha256", 00:22:24.272 "dhgroup": "ffdhe4096" 00:22:24.272 } 00:22:24.272 } 00:22:24.272 ]' 00:22:24.272 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:24.272 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:24.272 12:52:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:24.272 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:24.272 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:24.272 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.272 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.272 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.532 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:22:24.532 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:22:25.105 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.105 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:25.105 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.105 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.105 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.105 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:25.105 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:25.105 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:25.370 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:22:25.370 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:25.370 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:25.370 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:25.370 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:25.370 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.370 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:25.370 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.370 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.370 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.370 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:25.370 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:25.370 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:25.631 00:22:25.631 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:25.631 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:25.631 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.892 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.892 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.892 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.892 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:25.892 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.892 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:25.892 { 00:22:25.892 "cntlid": 31, 00:22:25.892 "qid": 0, 00:22:25.892 "state": "enabled", 00:22:25.892 "thread": "nvmf_tgt_poll_group_000", 00:22:25.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:25.892 "listen_address": { 00:22:25.892 "trtype": "TCP", 00:22:25.892 "adrfam": "IPv4", 00:22:25.892 "traddr": "10.0.0.2", 00:22:25.892 "trsvcid": "4420" 00:22:25.892 }, 00:22:25.892 "peer_address": { 00:22:25.892 "trtype": "TCP", 00:22:25.892 "adrfam": "IPv4", 00:22:25.892 "traddr": "10.0.0.1", 00:22:25.892 "trsvcid": "52548" 00:22:25.892 }, 00:22:25.892 "auth": { 00:22:25.892 "state": "completed", 00:22:25.892 "digest": "sha256", 00:22:25.892 "dhgroup": "ffdhe4096" 00:22:25.892 } 00:22:25.892 } 00:22:25.892 ]' 00:22:25.892 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:25.892 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:25.892 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:25.892 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:25.892 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:25.892 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.892 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.892 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.153 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:22:26.153 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:22:26.724 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.985 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:26.985 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.985 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.985 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.985 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:26.985 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:26.985 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:26.985 12:52:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:26.985 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:22:26.985 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.985 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:26.985 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:26.985 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:26.985 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.986 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.986 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.986 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.986 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.986 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.986 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.986 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.558 00:22:27.558 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:27.558 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:27.558 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.558 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.558 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.558 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.558 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.558 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.558 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:27.558 { 00:22:27.558 "cntlid": 33, 00:22:27.558 "qid": 0, 00:22:27.558 "state": "enabled", 00:22:27.558 "thread": "nvmf_tgt_poll_group_000", 00:22:27.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:27.558 "listen_address": { 00:22:27.558 "trtype": "TCP", 00:22:27.558 "adrfam": "IPv4", 00:22:27.558 "traddr": "10.0.0.2", 00:22:27.558 
"trsvcid": "4420" 00:22:27.558 }, 00:22:27.558 "peer_address": { 00:22:27.558 "trtype": "TCP", 00:22:27.558 "adrfam": "IPv4", 00:22:27.558 "traddr": "10.0.0.1", 00:22:27.558 "trsvcid": "52580" 00:22:27.558 }, 00:22:27.558 "auth": { 00:22:27.558 "state": "completed", 00:22:27.558 "digest": "sha256", 00:22:27.558 "dhgroup": "ffdhe6144" 00:22:27.558 } 00:22:27.558 } 00:22:27.558 ]' 00:22:27.558 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:27.558 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:27.558 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:27.819 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:27.819 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:27.819 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.819 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.819 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.080 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:22:28.080 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:22:28.651 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.651 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:28.651 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.651 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.651 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.651 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:28.651 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:28.651 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:28.911 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:22:28.911 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.911 12:52:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:28.911 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:28.911 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:28.911 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.911 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.911 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.911 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.911 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.911 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.911 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.911 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.172 00:22:29.172 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:29.172 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:29.172 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.435 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.435 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.435 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.435 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.435 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.435 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:29.435 { 00:22:29.435 "cntlid": 35, 00:22:29.435 "qid": 0, 00:22:29.435 "state": "enabled", 00:22:29.435 "thread": "nvmf_tgt_poll_group_000", 00:22:29.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:29.435 "listen_address": { 00:22:29.435 "trtype": "TCP", 00:22:29.435 "adrfam": "IPv4", 00:22:29.435 "traddr": "10.0.0.2", 00:22:29.435 "trsvcid": "4420" 00:22:29.435 }, 00:22:29.435 "peer_address": { 00:22:29.435 "trtype": "TCP", 00:22:29.435 "adrfam": "IPv4", 00:22:29.435 "traddr": "10.0.0.1", 00:22:29.435 "trsvcid": "52614" 00:22:29.435 }, 00:22:29.435 "auth": { 00:22:29.435 "state": "completed", 00:22:29.435 "digest": "sha256", 00:22:29.435 "dhgroup": "ffdhe6144" 00:22:29.435 } 00:22:29.435 } 00:22:29.435 ]' 00:22:29.435 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:29.435 12:52:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:29.435 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:29.435 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:29.435 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:29.435 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.435 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.435 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.697 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:22:29.697 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:22:30.270 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.270 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:30.270 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.270 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.270 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.270 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:30.270 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:30.270 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:30.531 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:22:30.531 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:30.531 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:30.531 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:30.531 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:30.531 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.531 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:22:30.531 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.531 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.531 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.531 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.531 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.531 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.793 00:22:30.793 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:30.793 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:30.793 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.053 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.053 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.053 12:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.053 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.053 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.053 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:31.053 { 00:22:31.053 "cntlid": 37, 00:22:31.053 "qid": 0, 00:22:31.053 "state": "enabled", 00:22:31.053 "thread": "nvmf_tgt_poll_group_000", 00:22:31.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:31.053 "listen_address": { 00:22:31.053 "trtype": "TCP", 00:22:31.053 "adrfam": "IPv4", 00:22:31.053 "traddr": "10.0.0.2", 00:22:31.053 "trsvcid": "4420" 00:22:31.054 }, 00:22:31.054 "peer_address": { 00:22:31.054 "trtype": "TCP", 00:22:31.054 "adrfam": "IPv4", 00:22:31.054 "traddr": "10.0.0.1", 00:22:31.054 "trsvcid": "52656" 00:22:31.054 }, 00:22:31.054 "auth": { 00:22:31.054 "state": "completed", 00:22:31.054 "digest": "sha256", 00:22:31.054 "dhgroup": "ffdhe6144" 00:22:31.054 } 00:22:31.054 } 00:22:31.054 ]' 00:22:31.054 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:31.054 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:31.054 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:31.054 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:31.054 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:31.314 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.315 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.315 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.315 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:22:31.315 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:22:32.257 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.257 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:32.257 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.257 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.257 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.257 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:32.257 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:32.257 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:32.257 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:22:32.257 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:32.257 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:32.257 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:32.257 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:32.257 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.257 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:32.257 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.257 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.257 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.257 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:32.257 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:32.257 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:32.518 00:22:32.518 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:32.518 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:32.518 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.778 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.778 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.778 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.778 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.778 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.778 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.778 { 00:22:32.778 "cntlid": 39, 00:22:32.778 "qid": 0, 00:22:32.778 "state": "enabled", 00:22:32.778 "thread": "nvmf_tgt_poll_group_000", 00:22:32.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:32.778 "listen_address": { 00:22:32.778 "trtype": "TCP", 00:22:32.778 "adrfam": 
"IPv4", 00:22:32.778 "traddr": "10.0.0.2", 00:22:32.778 "trsvcid": "4420" 00:22:32.778 }, 00:22:32.778 "peer_address": { 00:22:32.778 "trtype": "TCP", 00:22:32.778 "adrfam": "IPv4", 00:22:32.778 "traddr": "10.0.0.1", 00:22:32.778 "trsvcid": "52674" 00:22:32.778 }, 00:22:32.778 "auth": { 00:22:32.778 "state": "completed", 00:22:32.778 "digest": "sha256", 00:22:32.778 "dhgroup": "ffdhe6144" 00:22:32.778 } 00:22:32.778 } 00:22:32.778 ]' 00:22:32.778 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.778 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:32.778 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:32.778 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:32.778 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:33.038 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.038 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.038 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.038 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:22:33.038 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:22:33.609 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.870 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:33.870 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.870 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.870 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.870 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:33.870 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:33.870 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:33.870 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:33.870 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:22:33.870 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:33.870 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:33.870 
12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:33.870 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:33.870 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.870 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.870 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.870 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.870 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.870 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.870 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.870 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.442 00:22:34.442 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:34.442 12:53:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:34.442 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.703 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.703 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.703 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.703 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.703 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.703 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:34.703 { 00:22:34.703 "cntlid": 41, 00:22:34.703 "qid": 0, 00:22:34.703 "state": "enabled", 00:22:34.703 "thread": "nvmf_tgt_poll_group_000", 00:22:34.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:34.703 "listen_address": { 00:22:34.703 "trtype": "TCP", 00:22:34.703 "adrfam": "IPv4", 00:22:34.703 "traddr": "10.0.0.2", 00:22:34.703 "trsvcid": "4420" 00:22:34.703 }, 00:22:34.703 "peer_address": { 00:22:34.703 "trtype": "TCP", 00:22:34.703 "adrfam": "IPv4", 00:22:34.703 "traddr": "10.0.0.1", 00:22:34.703 "trsvcid": "52706" 00:22:34.703 }, 00:22:34.703 "auth": { 00:22:34.703 "state": "completed", 00:22:34.703 "digest": "sha256", 00:22:34.703 "dhgroup": "ffdhe8192" 00:22:34.703 } 00:22:34.703 } 00:22:34.703 ]' 00:22:34.703 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:34.703 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:22:34.703 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:34.703 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:34.703 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:34.703 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.703 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.703 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.963 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:22:34.963 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:22:35.533 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.533 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:35.533 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.533 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.533 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.533 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:35.533 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:35.533 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:35.794 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:22:35.794 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:35.794 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:35.794 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:35.794 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:35.794 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.794 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:22:35.794 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.794 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.794 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.794 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:35.794 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:35.794 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.055 00:22:36.315 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:36.315 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:36.315 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.315 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.315 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.315 12:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.315 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.315 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.315 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:36.315 { 00:22:36.315 "cntlid": 43, 00:22:36.315 "qid": 0, 00:22:36.315 "state": "enabled", 00:22:36.315 "thread": "nvmf_tgt_poll_group_000", 00:22:36.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:36.315 "listen_address": { 00:22:36.315 "trtype": "TCP", 00:22:36.315 "adrfam": "IPv4", 00:22:36.315 "traddr": "10.0.0.2", 00:22:36.315 "trsvcid": "4420" 00:22:36.315 }, 00:22:36.315 "peer_address": { 00:22:36.315 "trtype": "TCP", 00:22:36.315 "adrfam": "IPv4", 00:22:36.315 "traddr": "10.0.0.1", 00:22:36.315 "trsvcid": "46768" 00:22:36.315 }, 00:22:36.315 "auth": { 00:22:36.315 "state": "completed", 00:22:36.315 "digest": "sha256", 00:22:36.315 "dhgroup": "ffdhe8192" 00:22:36.315 } 00:22:36.315 } 00:22:36.315 ]' 00:22:36.315 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:36.315 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:36.315 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:36.315 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:36.575 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:36.575 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.575 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.575 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.575 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:22:36.575 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:22:37.517 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.518 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:37.518 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.518 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.518 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.518 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:37.518 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:37.518 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:37.518 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:22:37.518 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:37.518 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:37.518 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:37.518 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:37.518 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.518 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.518 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.518 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.518 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.518 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.518 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.518 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.090 00:22:38.090 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:38.090 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:38.090 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.090 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.090 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.090 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.090 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.090 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.090 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:38.090 { 00:22:38.090 "cntlid": 45, 00:22:38.090 "qid": 0, 00:22:38.090 "state": "enabled", 00:22:38.090 "thread": "nvmf_tgt_poll_group_000", 00:22:38.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:38.090 
"listen_address": { 00:22:38.090 "trtype": "TCP", 00:22:38.090 "adrfam": "IPv4", 00:22:38.090 "traddr": "10.0.0.2", 00:22:38.090 "trsvcid": "4420" 00:22:38.090 }, 00:22:38.090 "peer_address": { 00:22:38.090 "trtype": "TCP", 00:22:38.090 "adrfam": "IPv4", 00:22:38.090 "traddr": "10.0.0.1", 00:22:38.090 "trsvcid": "46792" 00:22:38.090 }, 00:22:38.090 "auth": { 00:22:38.090 "state": "completed", 00:22:38.090 "digest": "sha256", 00:22:38.090 "dhgroup": "ffdhe8192" 00:22:38.090 } 00:22:38.090 } 00:22:38.090 ]' 00:22:38.090 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:38.090 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:38.090 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:38.349 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:38.349 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:38.349 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.349 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.349 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.349 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:22:38.349 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:22:39.289 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.289 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:39.289 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.289 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.289 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.289 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:39.289 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:39.289 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:39.289 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:22:39.289 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:39.289 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:22:39.289 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:39.289 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:39.289 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.289 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:39.289 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.289 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.289 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.289 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:39.289 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:39.289 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:39.860 00:22:39.860 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:39.860 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:22:39.860 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.860 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.860 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.860 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.860 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.860 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.860 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:39.860 { 00:22:39.860 "cntlid": 47, 00:22:39.860 "qid": 0, 00:22:39.860 "state": "enabled", 00:22:39.860 "thread": "nvmf_tgt_poll_group_000", 00:22:39.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:39.860 "listen_address": { 00:22:39.860 "trtype": "TCP", 00:22:39.860 "adrfam": "IPv4", 00:22:39.860 "traddr": "10.0.0.2", 00:22:39.860 "trsvcid": "4420" 00:22:39.860 }, 00:22:39.860 "peer_address": { 00:22:39.860 "trtype": "TCP", 00:22:39.860 "adrfam": "IPv4", 00:22:39.860 "traddr": "10.0.0.1", 00:22:39.860 "trsvcid": "46816" 00:22:39.860 }, 00:22:39.860 "auth": { 00:22:39.860 "state": "completed", 00:22:39.860 "digest": "sha256", 00:22:39.860 "dhgroup": "ffdhe8192" 00:22:39.860 } 00:22:39.860 } 00:22:39.860 ]' 00:22:39.860 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.120 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:40.120 12:53:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.120 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:40.120 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.120 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.120 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.120 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.381 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:22:40.381 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:22:40.951 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.951 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:40.951 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:40.951 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.951 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.951 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:40.951 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:40.951 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:40.951 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:40.951 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:41.212 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:22:41.212 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:41.212 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:41.212 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:41.212 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:41.212 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.212 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.212 
12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.212 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.212 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.212 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.212 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.213 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.474 00:22:41.474 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:41.474 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:41.474 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.474 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.474 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.474 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.474 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.474 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.474 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:41.474 { 00:22:41.474 "cntlid": 49, 00:22:41.474 "qid": 0, 00:22:41.474 "state": "enabled", 00:22:41.474 "thread": "nvmf_tgt_poll_group_000", 00:22:41.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:41.474 "listen_address": { 00:22:41.474 "trtype": "TCP", 00:22:41.474 "adrfam": "IPv4", 00:22:41.474 "traddr": "10.0.0.2", 00:22:41.474 "trsvcid": "4420" 00:22:41.474 }, 00:22:41.474 "peer_address": { 00:22:41.474 "trtype": "TCP", 00:22:41.474 "adrfam": "IPv4", 00:22:41.474 "traddr": "10.0.0.1", 00:22:41.474 "trsvcid": "46830" 00:22:41.474 }, 00:22:41.474 "auth": { 00:22:41.474 "state": "completed", 00:22:41.474 "digest": "sha384", 00:22:41.474 "dhgroup": "null" 00:22:41.474 } 00:22:41.474 } 00:22:41.474 ]' 00:22:41.474 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:41.735 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:41.735 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:41.735 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:41.735 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:41.735 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.735 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:22:41.735 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.996 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:22:41.996 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:22:42.566 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.567 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:42.567 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.567 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.567 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.567 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:42.567 12:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:42.567 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:42.827 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:22:42.827 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:42.827 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:42.827 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:42.827 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:42.827 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.827 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.827 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.827 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.827 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.827 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.827 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.827 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.088 00:22:43.088 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:43.088 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:43.088 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.088 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.088 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.088 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.088 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.349 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.349 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:43.349 { 00:22:43.349 "cntlid": 51, 00:22:43.349 "qid": 0, 00:22:43.349 "state": "enabled", 00:22:43.349 "thread": "nvmf_tgt_poll_group_000", 00:22:43.349 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:43.349 "listen_address": { 00:22:43.349 "trtype": "TCP", 00:22:43.349 "adrfam": "IPv4", 00:22:43.349 "traddr": "10.0.0.2", 00:22:43.349 "trsvcid": "4420" 00:22:43.349 }, 00:22:43.349 "peer_address": { 00:22:43.349 "trtype": "TCP", 00:22:43.349 "adrfam": "IPv4", 00:22:43.349 "traddr": "10.0.0.1", 00:22:43.349 "trsvcid": "46860" 00:22:43.349 }, 00:22:43.349 "auth": { 00:22:43.349 "state": "completed", 00:22:43.349 "digest": "sha384", 00:22:43.349 "dhgroup": "null" 00:22:43.349 } 00:22:43.349 } 00:22:43.349 ]' 00:22:43.349 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:43.349 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:43.349 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.349 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:43.349 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:43.349 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.349 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.349 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.621 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:22:43.621 12:53:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:22:44.292 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.292 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:44.292 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.292 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.292 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.292 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:44.292 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:44.292 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:44.292 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:22:44.292 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:22:44.292 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:44.292 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:44.292 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:44.292 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.292 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.292 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.292 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.292 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.292 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.292 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.292 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.561 00:22:44.561 12:53:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:44.561 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:44.561 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.822 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.822 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.822 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.823 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.823 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.823 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:44.823 { 00:22:44.823 "cntlid": 53, 00:22:44.823 "qid": 0, 00:22:44.823 "state": "enabled", 00:22:44.823 "thread": "nvmf_tgt_poll_group_000", 00:22:44.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:44.823 "listen_address": { 00:22:44.823 "trtype": "TCP", 00:22:44.823 "adrfam": "IPv4", 00:22:44.823 "traddr": "10.0.0.2", 00:22:44.823 "trsvcid": "4420" 00:22:44.823 }, 00:22:44.823 "peer_address": { 00:22:44.823 "trtype": "TCP", 00:22:44.823 "adrfam": "IPv4", 00:22:44.823 "traddr": "10.0.0.1", 00:22:44.823 "trsvcid": "46884" 00:22:44.823 }, 00:22:44.823 "auth": { 00:22:44.823 "state": "completed", 00:22:44.823 "digest": "sha384", 00:22:44.823 "dhgroup": "null" 00:22:44.823 } 00:22:44.823 } 00:22:44.823 ]' 00:22:44.823 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:22:44.823 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:44.823 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:44.823 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:44.823 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:45.099 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.099 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.099 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.099 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:22:45.099 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:22:45.673 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.673 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:45.673 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.673 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.673 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.673 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:45.673 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:45.673 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:45.934 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:22:45.934 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:45.934 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:45.934 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:45.934 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:45.934 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.934 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:45.934 
12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.934 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.934 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.934 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:45.934 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:45.934 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.195 00:22:46.195 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:46.195 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:46.195 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.457 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.457 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.457 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.457 12:53:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.457 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.457 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:46.457 { 00:22:46.457 "cntlid": 55, 00:22:46.457 "qid": 0, 00:22:46.457 "state": "enabled", 00:22:46.457 "thread": "nvmf_tgt_poll_group_000", 00:22:46.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:46.457 "listen_address": { 00:22:46.457 "trtype": "TCP", 00:22:46.457 "adrfam": "IPv4", 00:22:46.457 "traddr": "10.0.0.2", 00:22:46.457 "trsvcid": "4420" 00:22:46.457 }, 00:22:46.457 "peer_address": { 00:22:46.457 "trtype": "TCP", 00:22:46.457 "adrfam": "IPv4", 00:22:46.457 "traddr": "10.0.0.1", 00:22:46.457 "trsvcid": "55366" 00:22:46.457 }, 00:22:46.457 "auth": { 00:22:46.457 "state": "completed", 00:22:46.457 "digest": "sha384", 00:22:46.457 "dhgroup": "null" 00:22:46.457 } 00:22:46.457 } 00:22:46.457 ]' 00:22:46.457 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:46.457 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:46.457 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:46.457 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:46.457 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:46.457 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.457 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.457 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.718 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:22:46.718 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:22:47.289 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.289 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:47.289 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.289 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.289 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.289 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:47.289 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:47.289 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:47.289 12:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:47.550 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:22:47.550 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:47.550 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:47.550 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:47.550 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:47.550 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.550 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.550 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.550 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.550 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.550 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.550 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.550 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.810 00:22:47.810 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:47.810 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:47.810 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.072 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.072 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.072 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.072 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.072 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.072 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:48.072 { 00:22:48.072 "cntlid": 57, 00:22:48.072 "qid": 0, 00:22:48.072 "state": "enabled", 00:22:48.072 "thread": "nvmf_tgt_poll_group_000", 00:22:48.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:48.072 "listen_address": { 00:22:48.072 "trtype": "TCP", 00:22:48.072 "adrfam": "IPv4", 00:22:48.072 "traddr": "10.0.0.2", 00:22:48.072 
"trsvcid": "4420" 00:22:48.072 }, 00:22:48.072 "peer_address": { 00:22:48.072 "trtype": "TCP", 00:22:48.072 "adrfam": "IPv4", 00:22:48.072 "traddr": "10.0.0.1", 00:22:48.072 "trsvcid": "55384" 00:22:48.072 }, 00:22:48.072 "auth": { 00:22:48.072 "state": "completed", 00:22:48.072 "digest": "sha384", 00:22:48.072 "dhgroup": "ffdhe2048" 00:22:48.072 } 00:22:48.072 } 00:22:48.072 ]' 00:22:48.072 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:48.072 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:48.072 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:48.072 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:48.072 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:48.072 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.072 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.072 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.333 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:22:48.333 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:22:48.904 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.904 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:48.904 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.904 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.904 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.904 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:48.904 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:48.904 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:49.164 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:22:49.164 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:49.164 12:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:49.164 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:49.164 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:49.164 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.164 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.164 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.164 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.164 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.164 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.164 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.164 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.425 00:22:49.425 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:49.425 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:49.426 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.687 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.687 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.687 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.687 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.687 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.687 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:49.687 { 00:22:49.687 "cntlid": 59, 00:22:49.687 "qid": 0, 00:22:49.687 "state": "enabled", 00:22:49.687 "thread": "nvmf_tgt_poll_group_000", 00:22:49.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:49.687 "listen_address": { 00:22:49.687 "trtype": "TCP", 00:22:49.687 "adrfam": "IPv4", 00:22:49.687 "traddr": "10.0.0.2", 00:22:49.687 "trsvcid": "4420" 00:22:49.687 }, 00:22:49.687 "peer_address": { 00:22:49.687 "trtype": "TCP", 00:22:49.687 "adrfam": "IPv4", 00:22:49.687 "traddr": "10.0.0.1", 00:22:49.687 "trsvcid": "55400" 00:22:49.687 }, 00:22:49.687 "auth": { 00:22:49.687 "state": "completed", 00:22:49.687 "digest": "sha384", 00:22:49.687 "dhgroup": "ffdhe2048" 00:22:49.687 } 00:22:49.687 } 00:22:49.687 ]' 00:22:49.687 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:49.687 12:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:49.687 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:49.687 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:49.687 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:49.687 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.687 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.687 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.948 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:22:49.948 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:22:50.519 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.780 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:50.780 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.780 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.780 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.780 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:50.780 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:50.780 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:50.780 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:22:50.780 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:50.780 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:50.780 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:50.780 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:50.780 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.780 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:22:50.780 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.780 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.780 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.780 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.780 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.780 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.040 00:22:51.040 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:51.040 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:51.041 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.302 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.302 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.302 12:53:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.302 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.302 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.302 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:51.302 { 00:22:51.302 "cntlid": 61, 00:22:51.302 "qid": 0, 00:22:51.302 "state": "enabled", 00:22:51.302 "thread": "nvmf_tgt_poll_group_000", 00:22:51.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:51.302 "listen_address": { 00:22:51.302 "trtype": "TCP", 00:22:51.302 "adrfam": "IPv4", 00:22:51.302 "traddr": "10.0.0.2", 00:22:51.302 "trsvcid": "4420" 00:22:51.302 }, 00:22:51.302 "peer_address": { 00:22:51.302 "trtype": "TCP", 00:22:51.302 "adrfam": "IPv4", 00:22:51.302 "traddr": "10.0.0.1", 00:22:51.302 "trsvcid": "55432" 00:22:51.302 }, 00:22:51.302 "auth": { 00:22:51.302 "state": "completed", 00:22:51.302 "digest": "sha384", 00:22:51.302 "dhgroup": "ffdhe2048" 00:22:51.302 } 00:22:51.302 } 00:22:51.302 ]' 00:22:51.302 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:51.302 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:51.302 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:51.302 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:51.302 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:51.564 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.564 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.564 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.564 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:22:51.564 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:22:52.506 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.507 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:52.507 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.507 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.507 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.507 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:52.507 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:52.507 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:52.507 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:22:52.507 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:52.507 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:52.507 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:52.507 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:52.507 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.507 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:52.507 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.507 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.507 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.507 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:52.507 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:52.507 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:52.767 00:22:52.767 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:52.767 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:52.767 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.028 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.028 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.028 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.028 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.028 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.028 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:53.028 { 00:22:53.028 "cntlid": 63, 00:22:53.028 "qid": 0, 00:22:53.028 "state": "enabled", 00:22:53.028 "thread": "nvmf_tgt_poll_group_000", 00:22:53.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:53.028 "listen_address": { 00:22:53.028 "trtype": "TCP", 00:22:53.028 "adrfam": 
"IPv4", 00:22:53.028 "traddr": "10.0.0.2", 00:22:53.028 "trsvcid": "4420" 00:22:53.028 }, 00:22:53.028 "peer_address": { 00:22:53.028 "trtype": "TCP", 00:22:53.028 "adrfam": "IPv4", 00:22:53.028 "traddr": "10.0.0.1", 00:22:53.028 "trsvcid": "55456" 00:22:53.028 }, 00:22:53.028 "auth": { 00:22:53.028 "state": "completed", 00:22:53.028 "digest": "sha384", 00:22:53.028 "dhgroup": "ffdhe2048" 00:22:53.028 } 00:22:53.028 } 00:22:53.028 ]' 00:22:53.028 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:53.028 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:53.028 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:53.029 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:53.029 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:53.029 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.029 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.029 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.289 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:22:53.289 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:22:53.860 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.860 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:53.860 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.860 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.860 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.860 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:53.860 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:53.860 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:53.860 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:54.122 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:22:54.122 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:54.122 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:54.122 
12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:54.122 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:54.122 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.122 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.122 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.122 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.122 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.122 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.122 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.122 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.382 00:22:54.382 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:54.382 12:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:54.382 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.642 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.642 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:54.642 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.642 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.642 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.642 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:54.642 { 00:22:54.642 "cntlid": 65, 00:22:54.642 "qid": 0, 00:22:54.642 "state": "enabled", 00:22:54.642 "thread": "nvmf_tgt_poll_group_000", 00:22:54.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:54.642 "listen_address": { 00:22:54.642 "trtype": "TCP", 00:22:54.642 "adrfam": "IPv4", 00:22:54.642 "traddr": "10.0.0.2", 00:22:54.642 "trsvcid": "4420" 00:22:54.642 }, 00:22:54.642 "peer_address": { 00:22:54.642 "trtype": "TCP", 00:22:54.642 "adrfam": "IPv4", 00:22:54.642 "traddr": "10.0.0.1", 00:22:54.642 "trsvcid": "55488" 00:22:54.642 }, 00:22:54.642 "auth": { 00:22:54.642 "state": "completed", 00:22:54.642 "digest": "sha384", 00:22:54.642 "dhgroup": "ffdhe3072" 00:22:54.642 } 00:22:54.642 } 00:22:54.642 ]' 00:22:54.642 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:54.642 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:22:54.642 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:54.642 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:54.642 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:54.642 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:54.642 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.642 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.903 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:22:54.903 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:22:55.486 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.486 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:55.486 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.486 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.486 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.486 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:55.486 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:55.486 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:55.746 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:22:55.746 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:55.746 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:55.746 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:55.746 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:55.746 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.746 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:22:55.747 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.747 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.747 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.747 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.747 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.747 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.008 00:22:56.008 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:56.008 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:56.008 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.268 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.268 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.268 12:53:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.268 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.268 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.268 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:56.268 { 00:22:56.268 "cntlid": 67, 00:22:56.268 "qid": 0, 00:22:56.268 "state": "enabled", 00:22:56.268 "thread": "nvmf_tgt_poll_group_000", 00:22:56.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:56.268 "listen_address": { 00:22:56.268 "trtype": "TCP", 00:22:56.268 "adrfam": "IPv4", 00:22:56.268 "traddr": "10.0.0.2", 00:22:56.268 "trsvcid": "4420" 00:22:56.268 }, 00:22:56.268 "peer_address": { 00:22:56.268 "trtype": "TCP", 00:22:56.268 "adrfam": "IPv4", 00:22:56.268 "traddr": "10.0.0.1", 00:22:56.268 "trsvcid": "45698" 00:22:56.268 }, 00:22:56.268 "auth": { 00:22:56.268 "state": "completed", 00:22:56.268 "digest": "sha384", 00:22:56.268 "dhgroup": "ffdhe3072" 00:22:56.268 } 00:22:56.268 } 00:22:56.268 ]' 00:22:56.268 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:56.268 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:56.268 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:56.268 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:56.268 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:56.268 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.268 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.268 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.529 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:22:56.529 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:22:57.100 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.100 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:57.100 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.100 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.100 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.100 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:57.100 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:57.100 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:57.360 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:22:57.360 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:57.361 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:57.361 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:57.361 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:57.361 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.361 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.361 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.361 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.361 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.361 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.361 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.361 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.621 00:22:57.621 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:57.621 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:57.621 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.881 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.881 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.881 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.881 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.881 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.881 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:57.881 { 00:22:57.881 "cntlid": 69, 00:22:57.881 "qid": 0, 00:22:57.881 "state": "enabled", 00:22:57.881 "thread": "nvmf_tgt_poll_group_000", 00:22:57.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:57.881 
"listen_address": { 00:22:57.881 "trtype": "TCP", 00:22:57.881 "adrfam": "IPv4", 00:22:57.881 "traddr": "10.0.0.2", 00:22:57.881 "trsvcid": "4420" 00:22:57.881 }, 00:22:57.881 "peer_address": { 00:22:57.881 "trtype": "TCP", 00:22:57.881 "adrfam": "IPv4", 00:22:57.881 "traddr": "10.0.0.1", 00:22:57.881 "trsvcid": "45722" 00:22:57.881 }, 00:22:57.881 "auth": { 00:22:57.881 "state": "completed", 00:22:57.881 "digest": "sha384", 00:22:57.881 "dhgroup": "ffdhe3072" 00:22:57.881 } 00:22:57.881 } 00:22:57.881 ]' 00:22:57.881 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:57.881 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:57.881 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:57.881 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:57.881 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:58.142 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:58.142 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.142 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.142 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:22:58.142 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:22:59.083 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:59.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.083 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:59.083 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.083 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.083 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.083 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:59.084 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:59.084 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:59.084 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:22:59.084 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:59.084 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:22:59.084 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:59.084 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:59.084 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:59.084 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:59.084 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.084 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.084 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.084 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:59.084 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:59.084 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:59.343 00:22:59.343 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:59.343 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:22:59.343 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.601 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.601 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.601 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.601 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.601 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.601 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:59.601 { 00:22:59.601 "cntlid": 71, 00:22:59.601 "qid": 0, 00:22:59.601 "state": "enabled", 00:22:59.601 "thread": "nvmf_tgt_poll_group_000", 00:22:59.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:59.601 "listen_address": { 00:22:59.601 "trtype": "TCP", 00:22:59.601 "adrfam": "IPv4", 00:22:59.601 "traddr": "10.0.0.2", 00:22:59.601 "trsvcid": "4420" 00:22:59.601 }, 00:22:59.601 "peer_address": { 00:22:59.601 "trtype": "TCP", 00:22:59.601 "adrfam": "IPv4", 00:22:59.601 "traddr": "10.0.0.1", 00:22:59.601 "trsvcid": "45748" 00:22:59.601 }, 00:22:59.601 "auth": { 00:22:59.601 "state": "completed", 00:22:59.601 "digest": "sha384", 00:22:59.601 "dhgroup": "ffdhe3072" 00:22:59.601 } 00:22:59.601 } 00:22:59.601 ]' 00:22:59.601 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:59.601 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:59.601 12:53:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:59.601 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:59.601 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:59.601 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.601 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.601 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.860 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:22:59.860 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:23:00.427 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.427 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:00.427 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:00.427 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.427 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.427 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:00.427 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:00.427 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:00.427 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:00.686 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:23:00.686 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:00.686 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:00.686 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:00.686 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:00.686 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.686 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.686 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:00.686 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.686 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.686 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.686 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.686 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.945 00:23:00.945 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:00.945 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:00.945 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.204 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.204 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:01.204 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.204 12:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.204 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.204 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:01.204 { 00:23:01.204 "cntlid": 73, 00:23:01.204 "qid": 0, 00:23:01.204 "state": "enabled", 00:23:01.204 "thread": "nvmf_tgt_poll_group_000", 00:23:01.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:01.204 "listen_address": { 00:23:01.204 "trtype": "TCP", 00:23:01.204 "adrfam": "IPv4", 00:23:01.204 "traddr": "10.0.0.2", 00:23:01.204 "trsvcid": "4420" 00:23:01.204 }, 00:23:01.204 "peer_address": { 00:23:01.204 "trtype": "TCP", 00:23:01.204 "adrfam": "IPv4", 00:23:01.204 "traddr": "10.0.0.1", 00:23:01.204 "trsvcid": "45762" 00:23:01.204 }, 00:23:01.204 "auth": { 00:23:01.204 "state": "completed", 00:23:01.204 "digest": "sha384", 00:23:01.204 "dhgroup": "ffdhe4096" 00:23:01.204 } 00:23:01.204 } 00:23:01.204 ]' 00:23:01.204 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:01.204 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:01.204 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:01.204 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:01.204 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:01.204 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.204 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.204 12:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.463 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:23:01.463 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:23:02.032 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:02.032 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:02.032 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.032 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.032 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.032 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:02.032 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:02.032 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:02.291 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:23:02.291 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:02.291 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:02.291 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:02.291 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:02.291 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:02.291 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.291 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.291 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.291 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.291 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.291 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.291 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.550 00:23:02.550 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:02.550 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:02.551 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.810 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.810 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.810 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.810 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.810 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.810 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:02.810 { 00:23:02.810 "cntlid": 75, 00:23:02.810 "qid": 0, 00:23:02.810 "state": "enabled", 00:23:02.810 "thread": "nvmf_tgt_poll_group_000", 00:23:02.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:02.810 
"listen_address": { 00:23:02.810 "trtype": "TCP", 00:23:02.810 "adrfam": "IPv4", 00:23:02.810 "traddr": "10.0.0.2", 00:23:02.810 "trsvcid": "4420" 00:23:02.810 }, 00:23:02.810 "peer_address": { 00:23:02.810 "trtype": "TCP", 00:23:02.810 "adrfam": "IPv4", 00:23:02.810 "traddr": "10.0.0.1", 00:23:02.810 "trsvcid": "45790" 00:23:02.810 }, 00:23:02.810 "auth": { 00:23:02.810 "state": "completed", 00:23:02.810 "digest": "sha384", 00:23:02.810 "dhgroup": "ffdhe4096" 00:23:02.811 } 00:23:02.811 } 00:23:02.811 ]' 00:23:02.811 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:02.811 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:02.811 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:02.811 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:02.811 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:02.811 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.811 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.811 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:03.071 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:23:03.071 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:23:03.641 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.641 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:03.641 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.641 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.641 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.641 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:03.641 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:03.641 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:03.902 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:23:03.902 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:03.902 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:23:03.902 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:03.902 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:03.902 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.902 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:03.902 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.902 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.902 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.902 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:03.902 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:03.902 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.163 00:23:04.163 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:23:04.163 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:04.163 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.424 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.424 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.424 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.424 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.424 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.424 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:04.424 { 00:23:04.424 "cntlid": 77, 00:23:04.424 "qid": 0, 00:23:04.424 "state": "enabled", 00:23:04.424 "thread": "nvmf_tgt_poll_group_000", 00:23:04.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:04.424 "listen_address": { 00:23:04.424 "trtype": "TCP", 00:23:04.424 "adrfam": "IPv4", 00:23:04.424 "traddr": "10.0.0.2", 00:23:04.424 "trsvcid": "4420" 00:23:04.424 }, 00:23:04.424 "peer_address": { 00:23:04.424 "trtype": "TCP", 00:23:04.424 "adrfam": "IPv4", 00:23:04.424 "traddr": "10.0.0.1", 00:23:04.424 "trsvcid": "45816" 00:23:04.424 }, 00:23:04.424 "auth": { 00:23:04.424 "state": "completed", 00:23:04.424 "digest": "sha384", 00:23:04.424 "dhgroup": "ffdhe4096" 00:23:04.424 } 00:23:04.424 } 00:23:04.424 ]' 00:23:04.424 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:04.424 12:53:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:04.424 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:04.424 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:04.424 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:04.424 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.424 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.424 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.684 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:23:04.684 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:23:05.625 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.625 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:05.625 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.625 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.625 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.625 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:05.625 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:05.625 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:05.625 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:23:05.625 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:05.625 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:05.625 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:05.625 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:05.625 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.625 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:05.625 12:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.625 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.625 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.625 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:05.625 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:05.625 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:05.886 00:23:05.886 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:05.886 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:05.886 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.146 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.146 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.146 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.146 12:53:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.146 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.146 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:06.146 { 00:23:06.146 "cntlid": 79, 00:23:06.146 "qid": 0, 00:23:06.146 "state": "enabled", 00:23:06.146 "thread": "nvmf_tgt_poll_group_000", 00:23:06.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:06.146 "listen_address": { 00:23:06.146 "trtype": "TCP", 00:23:06.146 "adrfam": "IPv4", 00:23:06.146 "traddr": "10.0.0.2", 00:23:06.146 "trsvcid": "4420" 00:23:06.146 }, 00:23:06.146 "peer_address": { 00:23:06.146 "trtype": "TCP", 00:23:06.146 "adrfam": "IPv4", 00:23:06.146 "traddr": "10.0.0.1", 00:23:06.146 "trsvcid": "42440" 00:23:06.146 }, 00:23:06.146 "auth": { 00:23:06.146 "state": "completed", 00:23:06.146 "digest": "sha384", 00:23:06.146 "dhgroup": "ffdhe4096" 00:23:06.146 } 00:23:06.146 } 00:23:06.146 ]' 00:23:06.146 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:06.146 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:06.146 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:06.146 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:06.146 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:06.146 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:06.146 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:06.146 12:53:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.407 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:23:06.407 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:23:06.977 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:06.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:06.977 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:06.977 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.977 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.977 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.977 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:06.977 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:06.978 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:23:06.978 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:07.238 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:23:07.238 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:07.238 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:07.238 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:07.238 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:07.238 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:07.238 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.238 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.238 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.238 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.239 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.239 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.239 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.499 00:23:07.499 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:07.499 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:07.500 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.761 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.761 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:07.761 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.761 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.761 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.761 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:07.761 { 00:23:07.761 "cntlid": 81, 00:23:07.761 "qid": 0, 00:23:07.761 "state": "enabled", 00:23:07.761 "thread": "nvmf_tgt_poll_group_000", 00:23:07.761 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:07.761 "listen_address": { 
00:23:07.761 "trtype": "TCP", 00:23:07.761 "adrfam": "IPv4", 00:23:07.761 "traddr": "10.0.0.2", 00:23:07.761 "trsvcid": "4420" 00:23:07.761 }, 00:23:07.761 "peer_address": { 00:23:07.761 "trtype": "TCP", 00:23:07.761 "adrfam": "IPv4", 00:23:07.761 "traddr": "10.0.0.1", 00:23:07.761 "trsvcid": "42458" 00:23:07.761 }, 00:23:07.761 "auth": { 00:23:07.761 "state": "completed", 00:23:07.761 "digest": "sha384", 00:23:07.761 "dhgroup": "ffdhe6144" 00:23:07.761 } 00:23:07.761 } 00:23:07.761 ]' 00:23:07.761 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:07.761 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:07.761 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:07.761 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:07.761 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:08.022 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:08.022 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:08.022 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:08.022 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:23:08.022 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:23:08.592 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:08.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:08.853 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:08.853 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.853 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.853 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.853 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:08.853 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:08.853 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:08.853 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:23:08.853 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:23:08.853 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:08.853 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:08.853 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:08.853 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:08.853 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.853 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.853 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.853 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.853 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.853 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.853 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.113 00:23:09.374 12:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:09.374 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:09.374 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.374 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.374 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:09.374 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.374 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.374 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.374 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:09.374 { 00:23:09.374 "cntlid": 83, 00:23:09.374 "qid": 0, 00:23:09.374 "state": "enabled", 00:23:09.374 "thread": "nvmf_tgt_poll_group_000", 00:23:09.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:09.374 "listen_address": { 00:23:09.374 "trtype": "TCP", 00:23:09.374 "adrfam": "IPv4", 00:23:09.374 "traddr": "10.0.0.2", 00:23:09.374 "trsvcid": "4420" 00:23:09.374 }, 00:23:09.374 "peer_address": { 00:23:09.374 "trtype": "TCP", 00:23:09.374 "adrfam": "IPv4", 00:23:09.374 "traddr": "10.0.0.1", 00:23:09.374 "trsvcid": "42478" 00:23:09.374 }, 00:23:09.374 "auth": { 00:23:09.374 "state": "completed", 00:23:09.374 "digest": "sha384", 00:23:09.374 "dhgroup": "ffdhe6144" 00:23:09.374 } 00:23:09.374 } 00:23:09.374 ]' 00:23:09.374 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:23:09.374 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:09.635 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:09.635 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:09.635 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:09.635 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:09.635 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:09.635 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:09.895 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:23:09.895 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:23:10.466 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:10.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:10.466 12:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:10.466 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.466 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.466 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.466 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:10.466 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:10.466 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:10.728 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:23:10.728 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:10.728 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:10.728 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:10.728 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:10.728 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:10.728 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:10.728 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.728 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.728 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.728 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:10.728 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:10.728 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:10.989 00:23:10.989 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:10.989 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:10.989 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:11.249 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.249 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:11.249 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.249 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.249 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.249 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:11.249 { 00:23:11.249 "cntlid": 85, 00:23:11.249 "qid": 0, 00:23:11.249 "state": "enabled", 00:23:11.249 "thread": "nvmf_tgt_poll_group_000", 00:23:11.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:11.249 "listen_address": { 00:23:11.249 "trtype": "TCP", 00:23:11.249 "adrfam": "IPv4", 00:23:11.249 "traddr": "10.0.0.2", 00:23:11.249 "trsvcid": "4420" 00:23:11.249 }, 00:23:11.249 "peer_address": { 00:23:11.249 "trtype": "TCP", 00:23:11.249 "adrfam": "IPv4", 00:23:11.249 "traddr": "10.0.0.1", 00:23:11.249 "trsvcid": "42516" 00:23:11.249 }, 00:23:11.249 "auth": { 00:23:11.249 "state": "completed", 00:23:11.249 "digest": "sha384", 00:23:11.249 "dhgroup": "ffdhe6144" 00:23:11.249 } 00:23:11.249 } 00:23:11.249 ]' 00:23:11.249 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:11.249 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:11.249 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:11.249 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:11.249 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:11.249 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:23:11.250 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:11.250 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:11.510 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:23:11.510 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:23:12.083 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:12.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:12.083 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:12.083 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.083 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.083 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.083 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:23:12.083 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:12.083 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:12.344 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:23:12.344 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:12.344 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:12.344 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:12.344 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:12.344 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:12.344 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:12.344 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.344 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.344 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.344 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:12.344 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:12.345 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:12.605 00:23:12.866 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:12.866 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:12.866 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.866 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.866 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:12.866 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.866 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.866 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.866 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:12.866 { 00:23:12.866 "cntlid": 87, 00:23:12.866 "qid": 0, 00:23:12.866 "state": "enabled", 00:23:12.866 "thread": "nvmf_tgt_poll_group_000", 00:23:12.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:12.866 "listen_address": { 00:23:12.866 "trtype": 
"TCP", 00:23:12.866 "adrfam": "IPv4", 00:23:12.866 "traddr": "10.0.0.2", 00:23:12.866 "trsvcid": "4420" 00:23:12.866 }, 00:23:12.866 "peer_address": { 00:23:12.866 "trtype": "TCP", 00:23:12.866 "adrfam": "IPv4", 00:23:12.866 "traddr": "10.0.0.1", 00:23:12.866 "trsvcid": "42556" 00:23:12.866 }, 00:23:12.866 "auth": { 00:23:12.866 "state": "completed", 00:23:12.866 "digest": "sha384", 00:23:12.866 "dhgroup": "ffdhe6144" 00:23:12.866 } 00:23:12.866 } 00:23:12.866 ]' 00:23:12.866 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:12.866 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:12.866 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:13.127 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:13.127 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:13.127 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:13.127 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:13.127 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:13.388 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:23:13.388 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:23:13.961 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:13.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:13.961 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:13.961 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.961 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.961 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.961 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:13.961 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:13.961 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:13.961 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:13.961 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:23:13.961 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:13.961 12:53:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:13.961 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:13.961 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:13.961 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:13.961 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:13.961 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.961 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.222 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.222 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.222 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.222 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.482 00:23:14.482 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:14.482 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:14.482 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:14.769 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.769 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:14.769 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.769 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.769 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.769 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:14.769 { 00:23:14.769 "cntlid": 89, 00:23:14.769 "qid": 0, 00:23:14.769 "state": "enabled", 00:23:14.769 "thread": "nvmf_tgt_poll_group_000", 00:23:14.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:14.769 "listen_address": { 00:23:14.769 "trtype": "TCP", 00:23:14.769 "adrfam": "IPv4", 00:23:14.769 "traddr": "10.0.0.2", 00:23:14.769 "trsvcid": "4420" 00:23:14.769 }, 00:23:14.769 "peer_address": { 00:23:14.769 "trtype": "TCP", 00:23:14.769 "adrfam": "IPv4", 00:23:14.769 "traddr": "10.0.0.1", 00:23:14.769 "trsvcid": "42574" 00:23:14.769 }, 00:23:14.769 "auth": { 00:23:14.769 "state": "completed", 00:23:14.769 "digest": "sha384", 00:23:14.769 "dhgroup": "ffdhe8192" 00:23:14.769 } 00:23:14.769 } 00:23:14.769 ]' 00:23:14.769 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:14.769 12:53:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:14.769 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:14.769 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:14.769 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:14.769 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:14.769 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:14.769 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.029 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:23:15.029 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:23:15.600 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:15.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:23:15.861 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:15.861 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.861 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.861 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.861 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:15.861 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:15.861 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:15.861 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:23:15.861 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:15.861 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:15.861 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:15.861 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:15.861 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:15.861 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:15.861 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.861 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.861 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.861 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:15.861 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:15.861 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.434 00:23:16.434 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:16.434 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:16.434 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:16.695 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.695 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:16.695 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.695 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.695 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.695 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:16.695 { 00:23:16.695 "cntlid": 91, 00:23:16.695 "qid": 0, 00:23:16.695 "state": "enabled", 00:23:16.695 "thread": "nvmf_tgt_poll_group_000", 00:23:16.695 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:16.695 "listen_address": { 00:23:16.695 "trtype": "TCP", 00:23:16.695 "adrfam": "IPv4", 00:23:16.695 "traddr": "10.0.0.2", 00:23:16.695 "trsvcid": "4420" 00:23:16.695 }, 00:23:16.695 "peer_address": { 00:23:16.695 "trtype": "TCP", 00:23:16.695 "adrfam": "IPv4", 00:23:16.695 "traddr": "10.0.0.1", 00:23:16.695 "trsvcid": "56240" 00:23:16.695 }, 00:23:16.695 "auth": { 00:23:16.695 "state": "completed", 00:23:16.695 "digest": "sha384", 00:23:16.695 "dhgroup": "ffdhe8192" 00:23:16.695 } 00:23:16.695 } 00:23:16.695 ]' 00:23:16.695 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:16.695 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:16.695 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:16.695 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:16.695 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:16.695 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:23:16.695 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:16.695 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:16.956 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:23:16.956 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:23:17.526 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:17.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:17.526 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:17.526 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.526 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.526 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.526 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:23:17.526 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:17.526 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:17.785 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:23:17.785 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:17.785 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:17.786 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:17.786 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:17.786 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:17.786 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.786 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.786 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.786 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.786 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.786 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.786 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.355 00:23:18.355 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:18.355 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:18.355 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:18.355 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.355 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:18.355 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.355 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.355 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.355 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:18.355 { 00:23:18.355 "cntlid": 93, 00:23:18.355 "qid": 0, 00:23:18.355 "state": "enabled", 00:23:18.355 "thread": "nvmf_tgt_poll_group_000", 00:23:18.355 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:18.355 "listen_address": { 00:23:18.355 "trtype": "TCP", 00:23:18.355 "adrfam": "IPv4", 00:23:18.355 "traddr": "10.0.0.2", 00:23:18.355 "trsvcid": "4420" 00:23:18.355 }, 00:23:18.355 "peer_address": { 00:23:18.355 "trtype": "TCP", 00:23:18.355 "adrfam": "IPv4", 00:23:18.355 "traddr": "10.0.0.1", 00:23:18.355 "trsvcid": "56256" 00:23:18.355 }, 00:23:18.355 "auth": { 00:23:18.355 "state": "completed", 00:23:18.355 "digest": "sha384", 00:23:18.355 "dhgroup": "ffdhe8192" 00:23:18.355 } 00:23:18.355 } 00:23:18.355 ]' 00:23:18.355 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:18.355 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:18.355 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:18.616 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:18.616 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:18.616 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:18.616 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:18.616 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:18.616 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:23:18.616 12:53:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:23:19.560 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:19.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:19.560 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:19.560 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.560 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.560 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.560 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:19.560 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:19.560 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:19.560 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:23:19.560 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:23:19.560 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:19.560 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:19.560 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:19.560 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:19.560 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:19.560 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.560 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.560 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.560 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:19.560 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:19.560 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:20.132 00:23:20.132 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:23:20.132 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:20.132 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:20.393 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.393 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:20.393 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.393 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.393 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.393 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:20.393 { 00:23:20.393 "cntlid": 95, 00:23:20.393 "qid": 0, 00:23:20.393 "state": "enabled", 00:23:20.393 "thread": "nvmf_tgt_poll_group_000", 00:23:20.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:20.393 "listen_address": { 00:23:20.393 "trtype": "TCP", 00:23:20.393 "adrfam": "IPv4", 00:23:20.393 "traddr": "10.0.0.2", 00:23:20.393 "trsvcid": "4420" 00:23:20.393 }, 00:23:20.393 "peer_address": { 00:23:20.393 "trtype": "TCP", 00:23:20.393 "adrfam": "IPv4", 00:23:20.393 "traddr": "10.0.0.1", 00:23:20.393 "trsvcid": "56272" 00:23:20.393 }, 00:23:20.393 "auth": { 00:23:20.393 "state": "completed", 00:23:20.393 "digest": "sha384", 00:23:20.393 "dhgroup": "ffdhe8192" 00:23:20.393 } 00:23:20.393 } 00:23:20.393 ]' 00:23:20.393 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:20.393 12:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:20.393 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:20.393 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:20.394 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:20.394 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:20.394 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:20.394 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:20.654 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:23:20.654 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:23:21.225 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:21.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:21.225 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:21.225 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.225 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.225 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.225 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:23:21.225 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:21.225 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:21.225 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:21.225 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:21.486 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:23:21.487 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:21.487 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:21.487 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:21.487 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:21.487 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:21.487 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.487 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.487 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.487 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.487 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.487 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.487 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.747 00:23:21.747 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:21.747 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:21.747 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:21.747 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.747 12:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:21.747 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.747 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.033 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.033 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:22.033 { 00:23:22.033 "cntlid": 97, 00:23:22.033 "qid": 0, 00:23:22.033 "state": "enabled", 00:23:22.033 "thread": "nvmf_tgt_poll_group_000", 00:23:22.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:22.033 "listen_address": { 00:23:22.033 "trtype": "TCP", 00:23:22.033 "adrfam": "IPv4", 00:23:22.033 "traddr": "10.0.0.2", 00:23:22.033 "trsvcid": "4420" 00:23:22.033 }, 00:23:22.033 "peer_address": { 00:23:22.033 "trtype": "TCP", 00:23:22.033 "adrfam": "IPv4", 00:23:22.033 "traddr": "10.0.0.1", 00:23:22.033 "trsvcid": "56318" 00:23:22.033 }, 00:23:22.033 "auth": { 00:23:22.033 "state": "completed", 00:23:22.033 "digest": "sha512", 00:23:22.033 "dhgroup": "null" 00:23:22.033 } 00:23:22.033 } 00:23:22.033 ]' 00:23:22.033 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:22.033 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:22.033 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:22.033 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:22.033 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:22.033 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:22.033 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:22.033 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:22.366 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:23:22.366 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:23:22.940 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:22.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:22.940 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:22.940 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.940 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.940 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.940 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:22.940 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:22.940 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:22.940 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:23:22.940 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:22.940 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:22.940 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:22.940 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:22.940 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:22.940 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.940 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.940 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.940 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.940 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.940 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.941 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.202 00:23:23.202 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:23.202 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:23.202 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:23.463 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.463 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:23.463 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.463 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.463 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.463 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:23.463 { 00:23:23.463 "cntlid": 99, 
00:23:23.463 "qid": 0, 00:23:23.463 "state": "enabled", 00:23:23.463 "thread": "nvmf_tgt_poll_group_000", 00:23:23.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:23.463 "listen_address": { 00:23:23.463 "trtype": "TCP", 00:23:23.463 "adrfam": "IPv4", 00:23:23.463 "traddr": "10.0.0.2", 00:23:23.463 "trsvcid": "4420" 00:23:23.463 }, 00:23:23.463 "peer_address": { 00:23:23.463 "trtype": "TCP", 00:23:23.463 "adrfam": "IPv4", 00:23:23.463 "traddr": "10.0.0.1", 00:23:23.463 "trsvcid": "56344" 00:23:23.463 }, 00:23:23.463 "auth": { 00:23:23.463 "state": "completed", 00:23:23.463 "digest": "sha512", 00:23:23.463 "dhgroup": "null" 00:23:23.463 } 00:23:23.463 } 00:23:23.463 ]' 00:23:23.463 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:23.463 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:23.463 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:23.463 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:23.463 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:23.724 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:23.724 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:23.724 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:23.724 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret 
DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:23:23.724 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:23:24.665 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:24.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:24.665 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:24.665 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.665 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.665 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.665 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:24.665 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:24.665 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:24.665 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:23:24.665 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:24.665 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:24.665 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:24.665 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:24.665 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:24.665 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.665 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.665 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.665 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.665 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.665 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.665 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.925 00:23:24.925 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:24.926 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:24.926 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:25.185 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.186 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:25.186 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.186 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.186 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.186 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:25.186 { 00:23:25.186 "cntlid": 101, 00:23:25.186 "qid": 0, 00:23:25.186 "state": "enabled", 00:23:25.186 "thread": "nvmf_tgt_poll_group_000", 00:23:25.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:25.186 "listen_address": { 00:23:25.186 "trtype": "TCP", 00:23:25.186 "adrfam": "IPv4", 00:23:25.186 "traddr": "10.0.0.2", 00:23:25.186 "trsvcid": "4420" 00:23:25.186 }, 00:23:25.186 "peer_address": { 00:23:25.186 "trtype": "TCP", 00:23:25.186 "adrfam": "IPv4", 00:23:25.186 "traddr": "10.0.0.1", 00:23:25.186 "trsvcid": "42562" 00:23:25.186 }, 00:23:25.186 "auth": { 00:23:25.186 "state": "completed", 00:23:25.186 "digest": "sha512", 00:23:25.186 "dhgroup": "null" 00:23:25.186 } 00:23:25.186 } 
00:23:25.186 ]' 00:23:25.186 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:25.186 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:25.186 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:25.186 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:25.186 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:25.186 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:25.186 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:25.186 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:25.446 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:23:25.446 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:23:26.019 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:26.019 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:26.019 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:26.019 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.019 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.019 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.019 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:26.019 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:26.019 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:26.281 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:23:26.281 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:26.281 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:26.281 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:26.281 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:26.281 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:26.281 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:26.281 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.281 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.281 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.281 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:26.281 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:26.281 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:26.543 00:23:26.543 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:26.543 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:26.543 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.804 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.804 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:23:26.805 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.805 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.805 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.805 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:26.805 { 00:23:26.805 "cntlid": 103, 00:23:26.805 "qid": 0, 00:23:26.805 "state": "enabled", 00:23:26.805 "thread": "nvmf_tgt_poll_group_000", 00:23:26.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:26.805 "listen_address": { 00:23:26.805 "trtype": "TCP", 00:23:26.805 "adrfam": "IPv4", 00:23:26.805 "traddr": "10.0.0.2", 00:23:26.805 "trsvcid": "4420" 00:23:26.805 }, 00:23:26.805 "peer_address": { 00:23:26.805 "trtype": "TCP", 00:23:26.805 "adrfam": "IPv4", 00:23:26.805 "traddr": "10.0.0.1", 00:23:26.805 "trsvcid": "42602" 00:23:26.805 }, 00:23:26.805 "auth": { 00:23:26.805 "state": "completed", 00:23:26.805 "digest": "sha512", 00:23:26.805 "dhgroup": "null" 00:23:26.805 } 00:23:26.805 } 00:23:26.805 ]' 00:23:26.805 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:26.805 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:26.805 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:26.805 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:26.805 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:26.805 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:26.805 12:53:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:26.805 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:27.066 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:23:27.066 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:23:27.637 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:27.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:27.637 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:27.637 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.637 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.637 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.637 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:27.637 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:27.637 12:53:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:27.637 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:27.898 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:23:27.898 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:27.898 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:27.898 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:27.898 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:27.898 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:27.898 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.898 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.898 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.898 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.898 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.898 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.898 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.159 00:23:28.159 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:28.159 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:28.159 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:28.419 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.419 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:28.419 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.419 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.419 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.419 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:28.419 { 00:23:28.419 "cntlid": 105, 00:23:28.419 "qid": 0, 00:23:28.419 "state": "enabled", 00:23:28.419 "thread": "nvmf_tgt_poll_group_000", 00:23:28.419 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:28.419 "listen_address": { 00:23:28.419 "trtype": "TCP", 00:23:28.419 "adrfam": "IPv4", 00:23:28.419 "traddr": "10.0.0.2", 00:23:28.419 "trsvcid": "4420" 00:23:28.419 }, 00:23:28.419 "peer_address": { 00:23:28.419 "trtype": "TCP", 00:23:28.419 "adrfam": "IPv4", 00:23:28.419 "traddr": "10.0.0.1", 00:23:28.419 "trsvcid": "42624" 00:23:28.419 }, 00:23:28.419 "auth": { 00:23:28.419 "state": "completed", 00:23:28.419 "digest": "sha512", 00:23:28.419 "dhgroup": "ffdhe2048" 00:23:28.419 } 00:23:28.419 } 00:23:28.419 ]' 00:23:28.419 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:28.419 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:28.419 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:28.419 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:28.419 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:28.419 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:28.419 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:28.419 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:28.680 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret 
DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:23:28.680 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:23:29.249 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:29.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:29.249 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:29.249 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.249 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.249 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.249 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:29.249 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:29.249 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:29.508 12:53:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:23:29.508 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:29.508 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:29.508 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:29.508 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:29.508 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:29.508 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.508 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.508 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.508 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.508 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.508 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.508 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.769 00:23:29.769 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:29.769 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:29.769 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:30.029 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.029 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:30.029 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.029 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.029 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.029 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:30.029 { 00:23:30.029 "cntlid": 107, 00:23:30.029 "qid": 0, 00:23:30.029 "state": "enabled", 00:23:30.029 "thread": "nvmf_tgt_poll_group_000", 00:23:30.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:30.029 "listen_address": { 00:23:30.029 "trtype": "TCP", 00:23:30.029 "adrfam": "IPv4", 00:23:30.029 "traddr": "10.0.0.2", 00:23:30.029 "trsvcid": "4420" 00:23:30.029 }, 00:23:30.029 "peer_address": { 00:23:30.029 "trtype": "TCP", 00:23:30.029 "adrfam": "IPv4", 00:23:30.029 "traddr": "10.0.0.1", 00:23:30.029 "trsvcid": "42652" 00:23:30.029 }, 00:23:30.029 "auth": { 00:23:30.029 "state": 
"completed", 00:23:30.029 "digest": "sha512", 00:23:30.029 "dhgroup": "ffdhe2048" 00:23:30.029 } 00:23:30.029 } 00:23:30.029 ]' 00:23:30.029 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:30.029 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:30.029 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:30.029 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:30.029 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:30.029 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:30.029 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:30.029 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:30.290 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:23:30.290 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:23:30.862 12:54:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:30.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:30.863 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:30.863 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.863 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.863 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.863 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:30.863 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:30.863 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:31.123 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:23:31.123 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:31.123 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:31.123 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:31.123 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:31.123 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:31.123 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.123 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.123 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.123 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.123 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.123 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.123 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.383 00:23:31.383 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:31.383 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:31.383 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:31.383 
12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.645 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:31.645 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.645 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.645 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.645 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:31.645 { 00:23:31.645 "cntlid": 109, 00:23:31.645 "qid": 0, 00:23:31.645 "state": "enabled", 00:23:31.645 "thread": "nvmf_tgt_poll_group_000", 00:23:31.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:31.645 "listen_address": { 00:23:31.645 "trtype": "TCP", 00:23:31.645 "adrfam": "IPv4", 00:23:31.645 "traddr": "10.0.0.2", 00:23:31.645 "trsvcid": "4420" 00:23:31.645 }, 00:23:31.645 "peer_address": { 00:23:31.645 "trtype": "TCP", 00:23:31.645 "adrfam": "IPv4", 00:23:31.645 "traddr": "10.0.0.1", 00:23:31.645 "trsvcid": "42682" 00:23:31.645 }, 00:23:31.645 "auth": { 00:23:31.645 "state": "completed", 00:23:31.645 "digest": "sha512", 00:23:31.645 "dhgroup": "ffdhe2048" 00:23:31.645 } 00:23:31.645 } 00:23:31.645 ]' 00:23:31.645 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:31.645 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:31.645 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:31.645 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:31.645 12:54:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:31.645 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:31.645 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:31.645 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:31.905 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:23:31.905 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:23:32.474 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:32.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:32.474 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:32.474 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.474 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.474 
12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.474 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:32.474 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:32.474 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:32.734 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:23:32.734 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:32.734 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:32.734 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:32.734 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:32.734 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:32.734 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:32.734 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.734 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.734 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.734 12:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:32.734 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:32.734 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:32.994 00:23:32.994 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:32.994 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:32.994 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:33.255 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.255 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:33.255 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.255 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.255 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.255 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:33.255 { 00:23:33.255 "cntlid": 111, 
00:23:33.255 "qid": 0, 00:23:33.255 "state": "enabled", 00:23:33.255 "thread": "nvmf_tgt_poll_group_000", 00:23:33.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:33.255 "listen_address": { 00:23:33.255 "trtype": "TCP", 00:23:33.255 "adrfam": "IPv4", 00:23:33.255 "traddr": "10.0.0.2", 00:23:33.255 "trsvcid": "4420" 00:23:33.255 }, 00:23:33.255 "peer_address": { 00:23:33.255 "trtype": "TCP", 00:23:33.255 "adrfam": "IPv4", 00:23:33.255 "traddr": "10.0.0.1", 00:23:33.255 "trsvcid": "42716" 00:23:33.255 }, 00:23:33.255 "auth": { 00:23:33.255 "state": "completed", 00:23:33.255 "digest": "sha512", 00:23:33.255 "dhgroup": "ffdhe2048" 00:23:33.255 } 00:23:33.255 } 00:23:33.255 ]' 00:23:33.255 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:33.255 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:33.255 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:33.255 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:33.255 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:33.255 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:33.255 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:33.255 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:33.515 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:23:33.515 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:23:34.086 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:34.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:34.086 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:34.086 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.086 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.086 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.086 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:34.086 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:34.086 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:34.086 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:34.348 12:54:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:23:34.348 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:34.348 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:34.348 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:34.348 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:34.348 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:34.348 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:34.348 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.348 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.348 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.348 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:34.348 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:34.348 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:34.608 00:23:34.608 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:34.608 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:34.608 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:34.870 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.870 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:34.870 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.870 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.870 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.870 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:34.870 { 00:23:34.870 "cntlid": 113, 00:23:34.870 "qid": 0, 00:23:34.870 "state": "enabled", 00:23:34.870 "thread": "nvmf_tgt_poll_group_000", 00:23:34.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:34.870 "listen_address": { 00:23:34.870 "trtype": "TCP", 00:23:34.870 "adrfam": "IPv4", 00:23:34.870 "traddr": "10.0.0.2", 00:23:34.870 "trsvcid": "4420" 00:23:34.870 }, 00:23:34.870 "peer_address": { 00:23:34.870 "trtype": "TCP", 00:23:34.870 "adrfam": "IPv4", 00:23:34.870 "traddr": "10.0.0.1", 00:23:34.870 "trsvcid": "42740" 00:23:34.870 }, 00:23:34.870 "auth": { 00:23:34.870 "state": 
"completed", 00:23:34.870 "digest": "sha512", 00:23:34.870 "dhgroup": "ffdhe3072" 00:23:34.871 } 00:23:34.871 } 00:23:34.871 ]' 00:23:34.871 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:34.871 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:34.871 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:34.871 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:34.871 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:34.871 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:34.871 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:34.871 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:35.129 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:23:35.129 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret 
DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:23:35.696 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:35.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:35.697 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:35.697 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.697 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.697 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.697 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:35.697 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:35.697 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:35.956 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:23:35.956 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:35.956 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:35.956 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:35.956 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:23:35.956 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:35.956 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:35.956 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.956 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.956 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.956 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:35.956 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:35.956 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:36.215 00:23:36.215 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:36.215 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:36.215 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:36.474 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.474 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:36.474 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.474 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.474 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.474 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:36.474 { 00:23:36.474 "cntlid": 115, 00:23:36.474 "qid": 0, 00:23:36.474 "state": "enabled", 00:23:36.474 "thread": "nvmf_tgt_poll_group_000", 00:23:36.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:36.474 "listen_address": { 00:23:36.474 "trtype": "TCP", 00:23:36.475 "adrfam": "IPv4", 00:23:36.475 "traddr": "10.0.0.2", 00:23:36.475 "trsvcid": "4420" 00:23:36.475 }, 00:23:36.475 "peer_address": { 00:23:36.475 "trtype": "TCP", 00:23:36.475 "adrfam": "IPv4", 00:23:36.475 "traddr": "10.0.0.1", 00:23:36.475 "trsvcid": "54778" 00:23:36.475 }, 00:23:36.475 "auth": { 00:23:36.475 "state": "completed", 00:23:36.475 "digest": "sha512", 00:23:36.475 "dhgroup": "ffdhe3072" 00:23:36.475 } 00:23:36.475 } 00:23:36.475 ]' 00:23:36.475 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:36.475 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:36.475 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:36.475 12:54:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:36.475 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:36.475 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:36.475 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:36.475 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:36.733 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:23:36.733 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:23:37.302 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:37.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:37.302 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:37.302 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:37.302 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.302 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.302 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:37.302 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:37.302 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:37.562 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:23:37.563 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:37.563 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:37.563 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:37.563 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:37.563 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:37.563 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:37.563 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.563 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:23:37.563 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.563 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:37.563 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:37.563 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:37.823 00:23:37.823 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:37.823 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:37.823 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:38.084 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.084 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:38.084 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.084 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.084 12:54:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.084 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:38.084 { 00:23:38.084 "cntlid": 117, 00:23:38.084 "qid": 0, 00:23:38.084 "state": "enabled", 00:23:38.084 "thread": "nvmf_tgt_poll_group_000", 00:23:38.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:38.084 "listen_address": { 00:23:38.084 "trtype": "TCP", 00:23:38.084 "adrfam": "IPv4", 00:23:38.084 "traddr": "10.0.0.2", 00:23:38.084 "trsvcid": "4420" 00:23:38.084 }, 00:23:38.084 "peer_address": { 00:23:38.084 "trtype": "TCP", 00:23:38.084 "adrfam": "IPv4", 00:23:38.084 "traddr": "10.0.0.1", 00:23:38.084 "trsvcid": "54812" 00:23:38.084 }, 00:23:38.084 "auth": { 00:23:38.084 "state": "completed", 00:23:38.084 "digest": "sha512", 00:23:38.084 "dhgroup": "ffdhe3072" 00:23:38.084 } 00:23:38.084 } 00:23:38.084 ]' 00:23:38.084 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:38.085 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:38.085 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:38.085 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:38.085 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:38.085 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:38.085 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:38.085 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:38.345 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:23:38.345 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:23:38.918 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:38.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:38.918 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:38.918 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.918 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.918 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.918 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:38.918 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:38.918 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:39.179 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:23:39.179 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:39.179 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:39.179 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:39.179 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:39.179 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:39.179 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:39.179 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.179 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.179 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.179 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:39.179 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:39.179 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:39.439 00:23:39.439 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:39.439 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:39.439 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:39.700 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.700 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:39.700 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.700 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.700 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.700 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:39.700 { 00:23:39.700 "cntlid": 119, 00:23:39.700 "qid": 0, 00:23:39.700 "state": "enabled", 00:23:39.700 "thread": "nvmf_tgt_poll_group_000", 00:23:39.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:39.700 "listen_address": { 00:23:39.700 "trtype": "TCP", 00:23:39.700 "adrfam": "IPv4", 00:23:39.700 "traddr": "10.0.0.2", 00:23:39.700 "trsvcid": "4420" 00:23:39.700 }, 00:23:39.700 "peer_address": { 00:23:39.700 "trtype": "TCP", 00:23:39.700 "adrfam": "IPv4", 00:23:39.700 "traddr": "10.0.0.1", 
00:23:39.700 "trsvcid": "54822" 00:23:39.700 }, 00:23:39.700 "auth": { 00:23:39.700 "state": "completed", 00:23:39.700 "digest": "sha512", 00:23:39.700 "dhgroup": "ffdhe3072" 00:23:39.700 } 00:23:39.700 } 00:23:39.700 ]' 00:23:39.700 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:39.700 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:39.700 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:39.700 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:39.700 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:39.700 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:39.700 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:39.700 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:39.962 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:23:39.962 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:23:40.533 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:40.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:40.533 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:40.533 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.533 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.533 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.533 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:40.533 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:40.533 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:40.533 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:40.794 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:23:40.794 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:40.794 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:40.794 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:40.794 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:40.794 12:54:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:40.794 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:40.794 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.794 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.794 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.794 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:40.794 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:40.794 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:41.055 00:23:41.055 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:41.055 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:41.055 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:41.315 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.315 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:41.315 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.315 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.315 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.315 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:41.315 { 00:23:41.315 "cntlid": 121, 00:23:41.315 "qid": 0, 00:23:41.315 "state": "enabled", 00:23:41.315 "thread": "nvmf_tgt_poll_group_000", 00:23:41.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:41.315 "listen_address": { 00:23:41.315 "trtype": "TCP", 00:23:41.315 "adrfam": "IPv4", 00:23:41.315 "traddr": "10.0.0.2", 00:23:41.315 "trsvcid": "4420" 00:23:41.315 }, 00:23:41.315 "peer_address": { 00:23:41.315 "trtype": "TCP", 00:23:41.315 "adrfam": "IPv4", 00:23:41.315 "traddr": "10.0.0.1", 00:23:41.315 "trsvcid": "54858" 00:23:41.315 }, 00:23:41.315 "auth": { 00:23:41.315 "state": "completed", 00:23:41.315 "digest": "sha512", 00:23:41.315 "dhgroup": "ffdhe4096" 00:23:41.315 } 00:23:41.315 } 00:23:41.315 ]' 00:23:41.315 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:41.315 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:41.315 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:41.315 12:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:41.315 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:41.315 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:41.315 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:41.315 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:41.576 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:23:41.576 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:23:42.146 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:42.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:42.146 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:42.146 12:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.146 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.146 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.147 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:42.147 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:42.147 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:42.408 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:23:42.408 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:42.408 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:42.408 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:42.408 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:42.408 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:42.408 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:42.408 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.408 12:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.408 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.408 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:42.408 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:42.408 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:42.668 00:23:42.668 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:42.668 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:42.668 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:42.928 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.928 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:42.928 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.928 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:42.928 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.928 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:42.928 { 00:23:42.928 "cntlid": 123, 00:23:42.928 "qid": 0, 00:23:42.928 "state": "enabled", 00:23:42.928 "thread": "nvmf_tgt_poll_group_000", 00:23:42.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:42.928 "listen_address": { 00:23:42.928 "trtype": "TCP", 00:23:42.928 "adrfam": "IPv4", 00:23:42.928 "traddr": "10.0.0.2", 00:23:42.928 "trsvcid": "4420" 00:23:42.928 }, 00:23:42.928 "peer_address": { 00:23:42.928 "trtype": "TCP", 00:23:42.928 "adrfam": "IPv4", 00:23:42.928 "traddr": "10.0.0.1", 00:23:42.928 "trsvcid": "54884" 00:23:42.928 }, 00:23:42.928 "auth": { 00:23:42.928 "state": "completed", 00:23:42.928 "digest": "sha512", 00:23:42.928 "dhgroup": "ffdhe4096" 00:23:42.928 } 00:23:42.928 } 00:23:42.928 ]' 00:23:42.928 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:42.928 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:42.928 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:42.928 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:42.928 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:42.928 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:42.928 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:42.928 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:43.189 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:23:43.189 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:23:43.758 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:43.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:43.758 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:43.758 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.758 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.758 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.758 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:43.758 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:43.758 12:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:44.019 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:23:44.019 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:44.019 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:44.019 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:44.019 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:44.019 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:44.019 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:44.019 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.019 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.019 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.019 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:44.019 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:44.019 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:44.279 00:23:44.279 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:44.279 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:44.279 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:44.540 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.540 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:44.540 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.540 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.540 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.540 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:44.540 { 00:23:44.540 "cntlid": 125, 00:23:44.540 "qid": 0, 00:23:44.540 "state": "enabled", 00:23:44.540 "thread": "nvmf_tgt_poll_group_000", 00:23:44.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:44.540 "listen_address": { 00:23:44.540 "trtype": "TCP", 00:23:44.540 "adrfam": "IPv4", 00:23:44.540 "traddr": "10.0.0.2", 00:23:44.540 
"trsvcid": "4420" 00:23:44.540 }, 00:23:44.540 "peer_address": { 00:23:44.540 "trtype": "TCP", 00:23:44.540 "adrfam": "IPv4", 00:23:44.540 "traddr": "10.0.0.1", 00:23:44.540 "trsvcid": "54900" 00:23:44.540 }, 00:23:44.540 "auth": { 00:23:44.540 "state": "completed", 00:23:44.540 "digest": "sha512", 00:23:44.540 "dhgroup": "ffdhe4096" 00:23:44.540 } 00:23:44.540 } 00:23:44.540 ]' 00:23:44.540 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:44.540 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:44.540 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:44.540 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:44.540 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:44.540 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:44.540 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:44.540 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:44.801 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:23:44.801 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:23:45.371 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:45.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:45.632 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:45.632 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.632 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.632 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.632 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:45.632 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:45.632 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:45.632 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:23:45.632 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:45.632 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:45.632 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:45.632 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:45.632 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:45.632 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:45.632 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.632 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.632 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.632 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:45.632 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:45.632 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:45.892 00:23:45.892 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:45.892 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:45.892 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:46.154 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.154 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:46.154 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.154 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.154 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.154 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:46.154 { 00:23:46.154 "cntlid": 127, 00:23:46.154 "qid": 0, 00:23:46.154 "state": "enabled", 00:23:46.154 "thread": "nvmf_tgt_poll_group_000", 00:23:46.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:46.154 "listen_address": { 00:23:46.154 "trtype": "TCP", 00:23:46.154 "adrfam": "IPv4", 00:23:46.154 "traddr": "10.0.0.2", 00:23:46.154 "trsvcid": "4420" 00:23:46.154 }, 00:23:46.154 "peer_address": { 00:23:46.154 "trtype": "TCP", 00:23:46.154 "adrfam": "IPv4", 00:23:46.154 "traddr": "10.0.0.1", 00:23:46.154 "trsvcid": "48322" 00:23:46.154 }, 00:23:46.154 "auth": { 00:23:46.154 "state": "completed", 00:23:46.154 "digest": "sha512", 00:23:46.154 "dhgroup": "ffdhe4096" 00:23:46.154 } 00:23:46.154 } 00:23:46.154 ]' 00:23:46.154 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:46.154 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:46.154 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:46.154 12:54:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:46.154 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:46.414 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:46.414 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:46.414 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:46.414 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:23:46.414 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:23:46.984 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:46.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:46.984 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:46.984 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.984 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:23:46.984 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.984 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:46.984 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:46.984 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:46.984 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:47.244 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:23:47.244 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:47.244 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:47.244 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:47.244 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:47.244 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:47.244 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:47.244 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.244 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:23:47.244 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.244 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:47.244 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:47.244 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:47.505 00:23:47.505 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:47.505 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:47.505 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:47.766 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.766 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:47.766 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.766 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.766 12:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.766 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:47.766 { 00:23:47.766 "cntlid": 129, 00:23:47.766 "qid": 0, 00:23:47.766 "state": "enabled", 00:23:47.766 "thread": "nvmf_tgt_poll_group_000", 00:23:47.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:47.766 "listen_address": { 00:23:47.766 "trtype": "TCP", 00:23:47.766 "adrfam": "IPv4", 00:23:47.766 "traddr": "10.0.0.2", 00:23:47.766 "trsvcid": "4420" 00:23:47.766 }, 00:23:47.766 "peer_address": { 00:23:47.766 "trtype": "TCP", 00:23:47.766 "adrfam": "IPv4", 00:23:47.766 "traddr": "10.0.0.1", 00:23:47.766 "trsvcid": "48346" 00:23:47.766 }, 00:23:47.766 "auth": { 00:23:47.766 "state": "completed", 00:23:47.766 "digest": "sha512", 00:23:47.766 "dhgroup": "ffdhe6144" 00:23:47.766 } 00:23:47.766 } 00:23:47.766 ]' 00:23:47.766 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:47.766 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:47.766 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:47.766 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:47.766 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:48.026 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:48.026 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:48.026 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:48.027 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:23:48.027 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:23:48.969 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:48.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:48.969 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:48.969 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.969 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.969 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.969 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:48.969 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:48.969 12:54:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:48.969 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:23:48.969 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:48.969 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:48.969 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:48.969 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:48.969 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:48.969 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.969 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.969 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.969 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.969 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.969 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.969 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:49.231 00:23:49.231 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:49.231 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:49.231 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:49.492 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.492 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:49.492 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.492 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.492 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.492 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:49.492 { 00:23:49.492 "cntlid": 131, 00:23:49.492 "qid": 0, 00:23:49.492 "state": "enabled", 00:23:49.492 "thread": "nvmf_tgt_poll_group_000", 00:23:49.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:49.492 "listen_address": { 00:23:49.492 "trtype": "TCP", 00:23:49.492 "adrfam": "IPv4", 00:23:49.492 "traddr": "10.0.0.2", 00:23:49.492 
"trsvcid": "4420" 00:23:49.492 }, 00:23:49.492 "peer_address": { 00:23:49.492 "trtype": "TCP", 00:23:49.492 "adrfam": "IPv4", 00:23:49.492 "traddr": "10.0.0.1", 00:23:49.492 "trsvcid": "48370" 00:23:49.492 }, 00:23:49.492 "auth": { 00:23:49.492 "state": "completed", 00:23:49.492 "digest": "sha512", 00:23:49.492 "dhgroup": "ffdhe6144" 00:23:49.492 } 00:23:49.492 } 00:23:49.492 ]' 00:23:49.492 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:49.492 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:49.492 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:49.492 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:49.492 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:49.754 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:49.754 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:49.754 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:49.754 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:23:49.754 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:23:50.327 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:50.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:50.588 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:50.588 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.588 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.588 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.588 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:50.588 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:50.588 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:50.588 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:23:50.588 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:50.588 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:50.588 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:50.588 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:50.588 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:50.588 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:50.588 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.588 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.588 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.588 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:50.588 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:50.588 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:50.849 00:23:51.109 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:51.109 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:23:51.109 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:51.109 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.109 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:51.109 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.109 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.109 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.109 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:51.109 { 00:23:51.109 "cntlid": 133, 00:23:51.109 "qid": 0, 00:23:51.109 "state": "enabled", 00:23:51.109 "thread": "nvmf_tgt_poll_group_000", 00:23:51.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:51.109 "listen_address": { 00:23:51.109 "trtype": "TCP", 00:23:51.109 "adrfam": "IPv4", 00:23:51.109 "traddr": "10.0.0.2", 00:23:51.109 "trsvcid": "4420" 00:23:51.109 }, 00:23:51.109 "peer_address": { 00:23:51.109 "trtype": "TCP", 00:23:51.109 "adrfam": "IPv4", 00:23:51.109 "traddr": "10.0.0.1", 00:23:51.109 "trsvcid": "48388" 00:23:51.109 }, 00:23:51.109 "auth": { 00:23:51.109 "state": "completed", 00:23:51.109 "digest": "sha512", 00:23:51.109 "dhgroup": "ffdhe6144" 00:23:51.109 } 00:23:51.109 } 00:23:51.109 ]' 00:23:51.109 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:51.109 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:51.109 12:54:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:51.368 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:51.368 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:51.368 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:51.368 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:51.368 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:51.627 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:23:51.627 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:23:52.197 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:52.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:52.197 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:52.197 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.197 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.197 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.197 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:52.197 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:52.197 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:52.457 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:23:52.457 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:52.457 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:52.457 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:52.457 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:52.457 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:52.457 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:52.457 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.457 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.457 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.457 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:52.457 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:52.457 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:52.717 00:23:52.717 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:52.717 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:52.717 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:52.977 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.977 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:52.978 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.978 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:52.978 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.978 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:52.978 { 00:23:52.978 "cntlid": 135, 00:23:52.978 "qid": 0, 00:23:52.978 "state": "enabled", 00:23:52.978 "thread": "nvmf_tgt_poll_group_000", 00:23:52.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:52.978 "listen_address": { 00:23:52.978 "trtype": "TCP", 00:23:52.978 "adrfam": "IPv4", 00:23:52.978 "traddr": "10.0.0.2", 00:23:52.978 "trsvcid": "4420" 00:23:52.978 }, 00:23:52.978 "peer_address": { 00:23:52.978 "trtype": "TCP", 00:23:52.978 "adrfam": "IPv4", 00:23:52.978 "traddr": "10.0.0.1", 00:23:52.978 "trsvcid": "48408" 00:23:52.978 }, 00:23:52.978 "auth": { 00:23:52.978 "state": "completed", 00:23:52.978 "digest": "sha512", 00:23:52.978 "dhgroup": "ffdhe6144" 00:23:52.978 } 00:23:52.978 } 00:23:52.978 ]' 00:23:52.978 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:52.978 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:52.978 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:52.978 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:52.978 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:52.978 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:52.978 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:52.978 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:53.238 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:23:53.238 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:23:53.825 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:53.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:53.825 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:53.825 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.825 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.825 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.825 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:53.825 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:53.825 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:53.825 12:54:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:54.084 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:23:54.084 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:54.084 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:54.084 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:54.084 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:54.084 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:54.084 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:54.084 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.084 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.084 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.084 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:54.084 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:54.084 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:54.652 00:23:54.652 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:54.652 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:54.652 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:54.652 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.652 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:54.652 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.652 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.652 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.652 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:54.652 { 00:23:54.652 "cntlid": 137, 00:23:54.652 "qid": 0, 00:23:54.652 "state": "enabled", 00:23:54.652 "thread": "nvmf_tgt_poll_group_000", 00:23:54.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:54.652 "listen_address": { 00:23:54.652 "trtype": "TCP", 00:23:54.652 "adrfam": "IPv4", 00:23:54.652 "traddr": "10.0.0.2", 00:23:54.652 
"trsvcid": "4420" 00:23:54.652 }, 00:23:54.652 "peer_address": { 00:23:54.652 "trtype": "TCP", 00:23:54.652 "adrfam": "IPv4", 00:23:54.652 "traddr": "10.0.0.1", 00:23:54.652 "trsvcid": "48438" 00:23:54.652 }, 00:23:54.652 "auth": { 00:23:54.652 "state": "completed", 00:23:54.652 "digest": "sha512", 00:23:54.652 "dhgroup": "ffdhe8192" 00:23:54.652 } 00:23:54.652 } 00:23:54.652 ]' 00:23:54.652 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:54.652 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:54.652 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:54.912 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:54.912 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:54.912 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:54.912 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:54.912 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:54.912 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:23:54.912 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:23:55.854 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:55.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:55.854 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:55.854 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.854 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.854 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.854 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:55.854 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:55.854 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:55.854 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:23:55.854 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:55.854 12:54:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:55.854 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:55.854 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:55.854 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:55.854 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:55.854 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.854 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.854 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.854 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:55.854 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:55.854 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.423 00:23:56.423 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:56.423 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:56.423 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:56.683 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.683 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:56.683 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.683 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.683 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.683 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:56.683 { 00:23:56.683 "cntlid": 139, 00:23:56.683 "qid": 0, 00:23:56.683 "state": "enabled", 00:23:56.683 "thread": "nvmf_tgt_poll_group_000", 00:23:56.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:56.683 "listen_address": { 00:23:56.683 "trtype": "TCP", 00:23:56.683 "adrfam": "IPv4", 00:23:56.683 "traddr": "10.0.0.2", 00:23:56.683 "trsvcid": "4420" 00:23:56.683 }, 00:23:56.683 "peer_address": { 00:23:56.683 "trtype": "TCP", 00:23:56.683 "adrfam": "IPv4", 00:23:56.683 "traddr": "10.0.0.1", 00:23:56.683 "trsvcid": "33902" 00:23:56.683 }, 00:23:56.683 "auth": { 00:23:56.683 "state": "completed", 00:23:56.683 "digest": "sha512", 00:23:56.683 "dhgroup": "ffdhe8192" 00:23:56.683 } 00:23:56.683 } 00:23:56.683 ]' 00:23:56.683 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:56.683 12:54:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:56.683 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:56.683 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:56.683 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:56.683 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:56.683 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:56.683 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:56.943 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:23:56.943 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: --dhchap-ctrl-secret DHHC-1:02:OWNmNmFlNDVmNTRkOTA3YzA1MTcyOTg1ZDAzMjQ1ZWQwYzI5YzhmNzVkY2FlZTkyYSwaXA==: 00:23:57.514 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:57.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:57.514 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:57.514 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.514 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.514 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.514 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:57.514 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:57.514 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:57.774 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:23:57.774 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:57.774 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:57.774 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:57.774 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:57.774 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:57.774 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:23:57.774 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.774 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.774 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.774 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:57.774 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:57.774 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:58.345 00:23:58.345 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:58.345 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:58.345 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:58.345 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.345 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:58.345 12:54:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.345 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.345 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.345 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:58.345 { 00:23:58.345 "cntlid": 141, 00:23:58.345 "qid": 0, 00:23:58.345 "state": "enabled", 00:23:58.345 "thread": "nvmf_tgt_poll_group_000", 00:23:58.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:58.345 "listen_address": { 00:23:58.345 "trtype": "TCP", 00:23:58.345 "adrfam": "IPv4", 00:23:58.345 "traddr": "10.0.0.2", 00:23:58.345 "trsvcid": "4420" 00:23:58.345 }, 00:23:58.345 "peer_address": { 00:23:58.345 "trtype": "TCP", 00:23:58.345 "adrfam": "IPv4", 00:23:58.345 "traddr": "10.0.0.1", 00:23:58.345 "trsvcid": "33936" 00:23:58.345 }, 00:23:58.345 "auth": { 00:23:58.345 "state": "completed", 00:23:58.345 "digest": "sha512", 00:23:58.345 "dhgroup": "ffdhe8192" 00:23:58.345 } 00:23:58.345 } 00:23:58.345 ]' 00:23:58.345 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:58.606 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:58.606 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:58.606 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:58.606 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:58.606 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:58.606 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:58.606 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:58.866 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:23:58.866 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:01:NjRkOTEyNDFiN2Y2Mjc2MWU1NmM0MTgyODRlODYzN2XG1NfH: 00:23:59.436 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:59.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:59.436 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:59.436 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.436 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.436 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.436 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:59.436 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:59.436 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:59.696 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:23:59.696 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:59.696 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:59.696 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:59.696 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:59.696 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:59.696 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:59.696 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.696 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.696 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.696 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:59.696 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:59.696 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:00.008 00:24:00.008 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:00.008 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:00.008 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:00.298 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.298 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:00.298 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.298 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.298 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.298 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:00.298 { 00:24:00.298 "cntlid": 143, 00:24:00.298 "qid": 0, 00:24:00.298 "state": "enabled", 00:24:00.298 "thread": "nvmf_tgt_poll_group_000", 00:24:00.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:00.298 "listen_address": { 00:24:00.298 "trtype": "TCP", 00:24:00.298 "adrfam": 
"IPv4", 00:24:00.298 "traddr": "10.0.0.2", 00:24:00.298 "trsvcid": "4420" 00:24:00.298 }, 00:24:00.298 "peer_address": { 00:24:00.298 "trtype": "TCP", 00:24:00.298 "adrfam": "IPv4", 00:24:00.298 "traddr": "10.0.0.1", 00:24:00.298 "trsvcid": "33952" 00:24:00.298 }, 00:24:00.298 "auth": { 00:24:00.298 "state": "completed", 00:24:00.298 "digest": "sha512", 00:24:00.298 "dhgroup": "ffdhe8192" 00:24:00.298 } 00:24:00.298 } 00:24:00.298 ]' 00:24:00.298 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:00.298 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:00.298 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:00.298 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:00.298 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:00.298 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:00.298 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:00.298 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:00.560 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:24:00.560 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:24:01.131 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:01.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:01.391 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:01.391 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.391 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.391 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.391 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:24:01.391 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:24:01.391 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:24:01.391 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:01.391 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:01.391 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:01.391 12:54:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:24:01.391 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:01.391 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:01.391 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:01.391 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:01.391 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:01.391 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:01.391 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.391 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.391 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.391 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:01.391 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:01.391 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:01.961 00:24:01.961 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:01.961 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:01.961 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:02.221 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.221 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:02.222 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.222 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.222 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.222 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:02.222 { 00:24:02.222 "cntlid": 145, 00:24:02.222 "qid": 0, 00:24:02.222 "state": "enabled", 00:24:02.222 "thread": "nvmf_tgt_poll_group_000", 00:24:02.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:02.222 "listen_address": { 00:24:02.222 "trtype": "TCP", 00:24:02.222 "adrfam": "IPv4", 00:24:02.222 "traddr": "10.0.0.2", 00:24:02.222 "trsvcid": "4420" 00:24:02.222 }, 00:24:02.222 "peer_address": { 00:24:02.222 "trtype": "TCP", 00:24:02.222 "adrfam": "IPv4", 00:24:02.222 "traddr": "10.0.0.1", 00:24:02.222 "trsvcid": "33978" 00:24:02.222 }, 00:24:02.222 "auth": { 00:24:02.222 "state": 
"completed", 00:24:02.222 "digest": "sha512", 00:24:02.222 "dhgroup": "ffdhe8192" 00:24:02.222 } 00:24:02.222 } 00:24:02.222 ]' 00:24:02.222 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:02.222 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:02.222 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:02.222 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:02.222 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:02.222 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:02.222 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:02.222 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:02.483 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:24:02.483 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWYzZTYwZDRkN2NkYTQ2MzNiMTJkYTcyNmNiOTQzOTI0MzQ5M2Y4ZThlMDY5ZjQwNoniAg==: --dhchap-ctrl-secret 
DHHC-1:03:MDg1Yzg3MTVlNGIyMjBhN2YzN2RjY2VkY2U2MTBhZjdhMGE1ZmIyMGFjZmNlNDQ1M2IxNzBmYmEyZTUyYmViZYRhU0I=: 00:24:03.053 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:03.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:03.053 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:03.053 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.053 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.053 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.053 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:24:03.053 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.053 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.053 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.053 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:24:03.053 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:03.053 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:24:03.053 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:24:03.314 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.314 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:03.314 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.314 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:24:03.314 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:24:03.314 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:24:03.574 request: 00:24:03.574 { 00:24:03.574 "name": "nvme0", 00:24:03.574 "trtype": "tcp", 00:24:03.574 "traddr": "10.0.0.2", 00:24:03.574 "adrfam": "ipv4", 00:24:03.574 "trsvcid": "4420", 00:24:03.574 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:03.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:03.574 "prchk_reftag": false, 00:24:03.574 "prchk_guard": false, 00:24:03.574 "hdgst": false, 00:24:03.574 "ddgst": false, 00:24:03.574 "dhchap_key": "key2", 00:24:03.574 "allow_unrecognized_csi": false, 00:24:03.574 "method": "bdev_nvme_attach_controller", 00:24:03.575 "req_id": 1 00:24:03.575 } 00:24:03.575 Got JSON-RPC error response 00:24:03.575 response: 00:24:03.575 { 00:24:03.575 "code": -5, 00:24:03.575 "message": 
"Input/output error" 00:24:03.575 } 00:24:03.575 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:03.575 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:03.575 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:03.575 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:03.575 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:03.575 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.575 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.575 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.575 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:03.575 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.575 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.575 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.575 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:03.575 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:03.575 12:54:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:03.575 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:24:03.575 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.575 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:03.575 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.575 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:03.575 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:03.575 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:04.145 request: 00:24:04.145 { 00:24:04.145 "name": "nvme0", 00:24:04.145 "trtype": "tcp", 00:24:04.145 "traddr": "10.0.0.2", 00:24:04.145 "adrfam": "ipv4", 00:24:04.145 "trsvcid": "4420", 00:24:04.145 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:04.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:04.145 "prchk_reftag": false, 00:24:04.145 "prchk_guard": false, 00:24:04.145 "hdgst": 
false, 00:24:04.145 "ddgst": false, 00:24:04.145 "dhchap_key": "key1", 00:24:04.145 "dhchap_ctrlr_key": "ckey2", 00:24:04.145 "allow_unrecognized_csi": false, 00:24:04.145 "method": "bdev_nvme_attach_controller", 00:24:04.145 "req_id": 1 00:24:04.145 } 00:24:04.145 Got JSON-RPC error response 00:24:04.145 response: 00:24:04.145 { 00:24:04.145 "code": -5, 00:24:04.145 "message": "Input/output error" 00:24:04.145 } 00:24:04.145 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:04.145 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:04.146 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:04.146 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:04.146 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:04.146 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.146 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.146 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.146 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:24:04.146 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.146 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.146 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.146 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:04.146 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:04.146 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:04.146 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:24:04.146 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:04.146 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:04.146 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:04.146 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:04.146 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:04.146 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:04.716 request: 00:24:04.716 { 00:24:04.716 "name": "nvme0", 00:24:04.716 "trtype": 
"tcp", 00:24:04.716 "traddr": "10.0.0.2", 00:24:04.716 "adrfam": "ipv4", 00:24:04.716 "trsvcid": "4420", 00:24:04.716 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:04.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:04.716 "prchk_reftag": false, 00:24:04.716 "prchk_guard": false, 00:24:04.716 "hdgst": false, 00:24:04.716 "ddgst": false, 00:24:04.716 "dhchap_key": "key1", 00:24:04.716 "dhchap_ctrlr_key": "ckey1", 00:24:04.716 "allow_unrecognized_csi": false, 00:24:04.716 "method": "bdev_nvme_attach_controller", 00:24:04.717 "req_id": 1 00:24:04.717 } 00:24:04.717 Got JSON-RPC error response 00:24:04.717 response: 00:24:04.717 { 00:24:04.717 "code": -5, 00:24:04.717 "message": "Input/output error" 00:24:04.717 } 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3402007 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 3402007 ']' 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3402007 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3402007 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3402007' 00:24:04.717 killing process with pid 3402007 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3402007 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3402007 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3428060 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3428060 00:24:04.717 12:54:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3428060 ']' 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.717 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.658 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.658 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:24:05.658 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:05.658 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:05.658 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.658 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.658 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:24:05.658 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 3428060 00:24:05.658 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3428060 ']' 00:24:05.658 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.658 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:05.658 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.658 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.658 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.919 null0 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.WtY 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.4vW ]] 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4vW 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kpF 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.uow ]] 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uow 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Wx3 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.muB ]] 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.muB 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.919 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.919 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.919 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:05.919 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.UlF 00:24:05.919 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.919 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.919 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:24:05.919 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:24:05.920 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:24:05.920 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:05.920 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:05.920 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:05.920 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:05.920 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:05.920 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:24:05.920 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.920 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.920 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.920 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:05.920 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:05.920 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:06.861 nvme0n1 00:24:06.861 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:06.861 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:06.861 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:06.861 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.861 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:06.861 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.861 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.861 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.861 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:06.861 { 00:24:06.861 "cntlid": 1, 00:24:06.861 "qid": 0, 00:24:06.861 "state": "enabled", 00:24:06.861 "thread": "nvmf_tgt_poll_group_000", 00:24:06.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:06.861 "listen_address": { 00:24:06.861 "trtype": "TCP", 00:24:06.861 "adrfam": "IPv4", 00:24:06.861 "traddr": "10.0.0.2", 00:24:06.861 "trsvcid": "4420" 00:24:06.861 }, 00:24:06.861 "peer_address": { 00:24:06.861 "trtype": "TCP", 00:24:06.861 "adrfam": "IPv4", 00:24:06.861 "traddr": 
"10.0.0.1", 00:24:06.861 "trsvcid": "37746" 00:24:06.861 }, 00:24:06.861 "auth": { 00:24:06.861 "state": "completed", 00:24:06.861 "digest": "sha512", 00:24:06.862 "dhgroup": "ffdhe8192" 00:24:06.862 } 00:24:06.862 } 00:24:06.862 ]' 00:24:06.862 12:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:07.122 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:07.122 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:07.122 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:07.122 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:07.122 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:07.122 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:07.122 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:07.383 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:24:07.383 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:24:07.954 12:54:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:07.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:07.954 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:07.954 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.954 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.954 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.954 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:24:07.954 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.954 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.954 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.954 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:24:07.954 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:24:08.215 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:24:08.215 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:08.215 12:54:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:24:08.215 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:24:08.215 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:08.215 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:08.215 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:08.215 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:08.215 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:08.215 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:08.215 request: 00:24:08.215 { 00:24:08.215 "name": "nvme0", 00:24:08.215 "trtype": "tcp", 00:24:08.215 "traddr": "10.0.0.2", 00:24:08.215 "adrfam": "ipv4", 00:24:08.215 "trsvcid": "4420", 00:24:08.215 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:08.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:08.215 "prchk_reftag": false, 00:24:08.215 "prchk_guard": false, 00:24:08.215 "hdgst": false, 00:24:08.215 "ddgst": false, 00:24:08.215 "dhchap_key": "key3", 00:24:08.215 
"allow_unrecognized_csi": false, 00:24:08.215 "method": "bdev_nvme_attach_controller", 00:24:08.215 "req_id": 1 00:24:08.215 } 00:24:08.215 Got JSON-RPC error response 00:24:08.215 response: 00:24:08.215 { 00:24:08.215 "code": -5, 00:24:08.215 "message": "Input/output error" 00:24:08.215 } 00:24:08.476 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:08.476 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:08.476 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:08.476 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:08.476 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:24:08.476 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:24:08.476 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:08.476 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:08.476 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:24:08.476 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:08.476 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:24:08.476 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:24:08.476 12:54:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:08.476 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:08.476 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:08.476 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:08.476 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:08.476 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:08.736 request: 00:24:08.736 { 00:24:08.736 "name": "nvme0", 00:24:08.736 "trtype": "tcp", 00:24:08.736 "traddr": "10.0.0.2", 00:24:08.736 "adrfam": "ipv4", 00:24:08.737 "trsvcid": "4420", 00:24:08.737 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:08.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:08.737 "prchk_reftag": false, 00:24:08.737 "prchk_guard": false, 00:24:08.737 "hdgst": false, 00:24:08.737 "ddgst": false, 00:24:08.737 "dhchap_key": "key3", 00:24:08.737 "allow_unrecognized_csi": false, 00:24:08.737 "method": "bdev_nvme_attach_controller", 00:24:08.737 "req_id": 1 00:24:08.737 } 00:24:08.737 Got JSON-RPC error response 00:24:08.737 response: 00:24:08.737 { 00:24:08.737 "code": -5, 00:24:08.737 "message": "Input/output error" 00:24:08.737 } 00:24:08.737 
12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:08.737 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:08.737 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:08.737 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:08.737 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:24:08.737 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:24:08.737 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:24:08.737 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:08.737 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:08.737 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:08.997 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.997 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.997 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.997 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.997 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.997 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.997 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.997 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.997 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:08.997 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:08.997 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:08.997 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:24:08.997 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:08.997 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:08.997 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:08.997 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:08.997 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:08.997 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:09.258 request: 00:24:09.258 { 00:24:09.258 "name": "nvme0", 00:24:09.258 "trtype": "tcp", 00:24:09.258 "traddr": "10.0.0.2", 00:24:09.258 "adrfam": "ipv4", 00:24:09.258 "trsvcid": "4420", 00:24:09.258 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:09.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:09.258 "prchk_reftag": false, 00:24:09.258 "prchk_guard": false, 00:24:09.258 "hdgst": false, 00:24:09.258 "ddgst": false, 00:24:09.258 "dhchap_key": "key0", 00:24:09.258 "dhchap_ctrlr_key": "key1", 00:24:09.258 "allow_unrecognized_csi": false, 00:24:09.258 "method": "bdev_nvme_attach_controller", 00:24:09.258 "req_id": 1 00:24:09.258 } 00:24:09.258 Got JSON-RPC error response 00:24:09.258 response: 00:24:09.258 { 00:24:09.258 "code": -5, 00:24:09.258 "message": "Input/output error" 00:24:09.258 } 00:24:09.258 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:09.258 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:09.258 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:09.258 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:09.258 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:24:09.258 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:24:09.258 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:24:09.518 nvme0n1 00:24:09.518 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:24:09.518 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:24:09.518 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:09.778 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.778 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:09.778 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:09.778 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:24:09.778 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.778 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:24:09.778 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.778 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:24:09.778 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:09.778 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:10.719 nvme0n1 00:24:10.719 12:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:24:10.719 12:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:24:10.719 12:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:10.719 12:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.719 12:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:10.719 12:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.719 12:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.719 
12:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.719 12:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:24:10.719 12:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:24:10.719 12:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:10.979 12:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.979 12:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:24:10.979 12:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: --dhchap-ctrl-secret DHHC-1:03:MTRjZDdiMWMwNWU0M2IzN2YxYmRmZTgxNjhkNGIzZjlmMDI5YTE4NjU0MTY2NGM4YTdjODI3NGZkYjFmMjQwZGVRjIE=: 00:24:11.549 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:24:11.549 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:24:11.549 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:24:11.549 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:24:11.549 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:24:11.549 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:24:11.549 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:24:11.549 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:11.549 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:11.810 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:24:11.810 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:11.810 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:24:11.810 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:24:11.810 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:11.810 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:11.810 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:11.810 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:24:11.810 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:11.810 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:12.381 request: 00:24:12.381 { 00:24:12.381 "name": "nvme0", 00:24:12.381 "trtype": "tcp", 00:24:12.381 "traddr": "10.0.0.2", 00:24:12.381 "adrfam": "ipv4", 00:24:12.381 "trsvcid": "4420", 00:24:12.381 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:12.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:12.381 "prchk_reftag": false, 00:24:12.381 "prchk_guard": false, 00:24:12.381 "hdgst": false, 00:24:12.381 "ddgst": false, 00:24:12.381 "dhchap_key": "key1", 00:24:12.381 "allow_unrecognized_csi": false, 00:24:12.381 "method": "bdev_nvme_attach_controller", 00:24:12.381 "req_id": 1 00:24:12.381 } 00:24:12.381 Got JSON-RPC error response 00:24:12.381 response: 00:24:12.381 { 00:24:12.381 "code": -5, 00:24:12.381 "message": "Input/output error" 00:24:12.381 } 00:24:12.381 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:12.381 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:12.381 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:12.381 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:12.381 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:12.381 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:12.381 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:12.952 nvme0n1 00:24:12.952 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:24:12.952 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:24:12.952 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:13.212 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.212 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:13.212 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:13.472 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:13.472 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.472 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:13.472 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.472 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:24:13.472 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:24:13.472 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:24:13.472 nvme0n1 00:24:13.732 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:24:13.732 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:24:13.732 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:13.732 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.732 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:13.732 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:13.992 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:13.992 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.993 12:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.993 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.993 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: '' 2s 00:24:13.993 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:24:13.993 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:24:13.993 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: 00:24:13.993 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:24:13.993 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:24:13.993 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:24:13.993 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: ]] 00:24:13.993 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ODk4OWY1MTA2MGQ3OGJiZjljMDQ3NmQ1ZTk3Y2VjNzXT+2ja: 00:24:13.993 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:24:13.993 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:24:13.993 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:24:15.903 
12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:24:15.903 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:24:15.903 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:15.903 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:16.163 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:16.163 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:16.163 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:24:16.163 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:24:16.163 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.163 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.163 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.163 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: 2s 00:24:16.163 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:24:16.163 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:24:16.163 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:24:16.163 12:54:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: 00:24:16.163 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:24:16.163 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:24:16.163 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:24:16.163 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: ]] 00:24:16.163 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Yjg5NjYzZTkzNTBjNGI1YmJlYWRmM2M2NzAwNzNiZmRhN2YyNjQ1ZTQzZTZiYWM1PxMYuw==: 00:24:16.163 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:24:16.163 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:24:18.078 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:24:18.078 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:24:18.078 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:18.078 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:18.078 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:18.078 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:18.078 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:24:18.078 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:18.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:18.078 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:18.078 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.078 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.078 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.078 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:18.078 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:18.078 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:19.018 nvme0n1 00:24:19.018 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:24:19.018 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.018 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.018 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.018 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:19.018 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:19.279 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:24:19.279 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:24:19.279 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:19.539 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.539 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:19.539 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.539 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.539 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.539 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:24:19.539 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:24:19.800 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:24:19.800 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:24:19.800 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:19.800 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.800 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:19.800 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.800 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.800 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.800 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:19.800 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:19.800 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:19.800 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:24:19.800 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:19.800 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:24:19.800 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:19.800 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:19.800 12:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:20.371 request: 00:24:20.371 { 00:24:20.371 "name": "nvme0", 00:24:20.371 "dhchap_key": "key1", 00:24:20.371 "dhchap_ctrlr_key": "key3", 00:24:20.371 "method": "bdev_nvme_set_keys", 00:24:20.371 "req_id": 1 00:24:20.371 } 00:24:20.371 Got JSON-RPC error response 00:24:20.371 response: 00:24:20.371 { 00:24:20.371 "code": -13, 00:24:20.371 "message": "Permission denied" 00:24:20.371 } 00:24:20.371 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:20.371 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:20.371 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:20.371 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:20.371 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:24:20.371 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:24:20.371 12:54:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:20.631 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:24:20.631 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:24:21.571 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:24:21.571 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:24:21.571 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:21.831 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:24:21.831 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:21.831 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.831 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.831 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.831 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:21.831 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:21.832 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:22.402 nvme0n1 00:24:22.402 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:22.402 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.402 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.402 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.402 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:22.402 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:22.402 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:22.402 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:24:22.402 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:22.402 12:54:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:24:22.402 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:22.402 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:22.402 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:22.973 request: 00:24:22.973 { 00:24:22.973 "name": "nvme0", 00:24:22.973 "dhchap_key": "key2", 00:24:22.973 "dhchap_ctrlr_key": "key0", 00:24:22.973 "method": "bdev_nvme_set_keys", 00:24:22.973 "req_id": 1 00:24:22.973 } 00:24:22.973 Got JSON-RPC error response 00:24:22.973 response: 00:24:22.973 { 00:24:22.973 "code": -13, 00:24:22.973 "message": "Permission denied" 00:24:22.973 } 00:24:22.973 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:22.973 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:22.973 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:22.973 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:22.973 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:24:22.973 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:22.973 12:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:24:23.233 12:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:24:23.233 12:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:24:24.174 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:24:24.174 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:24:24.174 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:24.434 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:24:24.434 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:24:24.434 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:24:24.434 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3402075 00:24:24.434 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3402075 ']' 00:24:24.434 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3402075 00:24:24.434 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:24:24.434 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:24.434 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3402075 00:24:24.434 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:24.434 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:24.434 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 3402075' 00:24:24.434 killing process with pid 3402075 00:24:24.434 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3402075 00:24:24.434 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3402075 00:24:24.695 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:24:24.695 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:24.695 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:24:24.695 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:24.695 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:24:24.695 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:24.695 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:24.695 rmmod nvme_tcp 00:24:24.695 rmmod nvme_fabrics 00:24:24.695 rmmod nvme_keyring 00:24:24.695 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:24.695 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:24:24.695 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:24:24.695 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3428060 ']' 00:24:24.695 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3428060 00:24:24.695 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3428060 ']' 00:24:24.695 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3428060 
00:24:24.695 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:24:24.695 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:24.695 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3428060 00:24:24.695 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:24.695 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:24.695 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3428060' 00:24:24.696 killing process with pid 3428060 00:24:24.696 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3428060 00:24:24.696 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3428060 00:24:24.696 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:24.696 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:24.696 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:24.696 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:24:24.696 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:24:24.696 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:24.696 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:24.696 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:24.696 12:54:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:24.696 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.696 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:24.696 12:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.239 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:27.239 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.WtY /tmp/spdk.key-sha256.kpF /tmp/spdk.key-sha384.Wx3 /tmp/spdk.key-sha512.UlF /tmp/spdk.key-sha512.4vW /tmp/spdk.key-sha384.uow /tmp/spdk.key-sha256.muB '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:24:27.239 00:24:27.239 real 2m36.911s 00:24:27.239 user 5m52.893s 00:24:27.239 sys 0m24.484s 00:24:27.239 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:27.239 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.239 ************************************ 00:24:27.239 END TEST nvmf_auth_target 00:24:27.239 ************************************ 00:24:27.239 12:54:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:24:27.239 12:54:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:27.239 12:54:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:27.239 12:54:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:24:27.239 12:54:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:27.239 ************************************ 00:24:27.239 START TEST nvmf_bdevio_no_huge 00:24:27.239 ************************************ 00:24:27.239 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:27.239 * Looking for test storage... 00:24:27.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:24:27.239 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:27.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.240 --rc genhtml_branch_coverage=1 00:24:27.240 --rc genhtml_function_coverage=1 00:24:27.240 --rc genhtml_legend=1 00:24:27.240 --rc geninfo_all_blocks=1 00:24:27.240 --rc geninfo_unexecuted_blocks=1 00:24:27.240 00:24:27.240 ' 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:27.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.240 --rc genhtml_branch_coverage=1 00:24:27.240 --rc genhtml_function_coverage=1 00:24:27.240 --rc genhtml_legend=1 00:24:27.240 --rc geninfo_all_blocks=1 00:24:27.240 --rc geninfo_unexecuted_blocks=1 00:24:27.240 00:24:27.240 ' 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:27.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.240 --rc genhtml_branch_coverage=1 00:24:27.240 --rc genhtml_function_coverage=1 00:24:27.240 --rc genhtml_legend=1 00:24:27.240 --rc geninfo_all_blocks=1 00:24:27.240 --rc geninfo_unexecuted_blocks=1 00:24:27.240 00:24:27.240 ' 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:27.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.240 --rc genhtml_branch_coverage=1 
00:24:27.240 --rc genhtml_function_coverage=1 00:24:27.240 --rc genhtml_legend=1 00:24:27.240 --rc geninfo_all_blocks=1 00:24:27.240 --rc geninfo_unexecuted_blocks=1 00:24:27.240 00:24:27.240 ' 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:27.240 12:54:57 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:27.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:24:27.240 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:35.377 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 
0x159b)' 00:24:35.378 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:35.378 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:35.378 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.378 
12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:35.378 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:24:35.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:35.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:24:35.378 00:24:35.378 --- 10.0.0.2 ping statistics --- 00:24:35.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.378 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:35.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:35.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:24:35.378 00:24:35.378 --- 10.0.0.1 ping statistics --- 00:24:35.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.378 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3436201 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3436201 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3436201 ']' 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:35.378 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:35.378 [2024-11-28 12:55:04.825353] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:24:35.378 [2024-11-28 12:55:04.825426] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:24:35.378 [2024-11-28 12:55:04.984882] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:35.378 [2024-11-28 12:55:05.033493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:35.378 [2024-11-28 12:55:05.079371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.378 [2024-11-28 12:55:05.079413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.378 [2024-11-28 12:55:05.079421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.379 [2024-11-28 12:55:05.079428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.379 [2024-11-28 12:55:05.079435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
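The `[ DPDK EAL parameters: nvmf -c 0x78 ... ]` line above, the "Total cores available: 4" notice, and the reactors that then start on cores 3 through 6 are all consequences of the same hex core mask: each set bit N in the mask enables core N. A minimal sketch of that decoding (the function name is invented here for illustration):

```python
# Decode a DPDK/SPDK core mask (e.g. -c 0x78 or -m 0x78) into the list of
# CPU cores it selects. Each set bit N in the mask enables core N.
def decode_coremask(mask: str) -> list[int]:
    value = int(mask, 16)
    return [bit for bit in range(value.bit_length()) if (value >> bit) & 1]

print(decode_coremask("0x78"))  # 0x78 = 0b1111000 -> cores [3, 4, 5, 6]
```

This also matches the later bdevio run in this log, which passes `-c 0x7` and starts three reactors on cores 0, 1, and 2.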
00:24:35.379 [2024-11-28 12:55:05.081223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:35.379 [2024-11-28 12:55:05.081390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:35.379 [2024-11-28 12:55:05.081540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:35.379 [2024-11-28 12:55:05.081543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:35.639 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:35.639 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:24:35.639 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:35.639 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:35.639 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:35.639 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.639 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:35.639 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.639 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:35.639 [2024-11-28 12:55:05.705378] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.639 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.639 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:35.639 12:55:05 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.639 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:35.639 Malloc0 00:24:35.639 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.639 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:35.640 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.640 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:35.640 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.640 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:35.640 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.640 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:35.640 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.640 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:35.640 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.640 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:35.640 [2024-11-28 12:55:05.759142] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.640 12:55:05 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.901 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:24:35.901 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:24:35.901 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:24:35.901 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:24:35.901 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:35.901 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:35.901 { 00:24:35.901 "params": { 00:24:35.901 "name": "Nvme$subsystem", 00:24:35.901 "trtype": "$TEST_TRANSPORT", 00:24:35.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:35.901 "adrfam": "ipv4", 00:24:35.901 "trsvcid": "$NVMF_PORT", 00:24:35.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:35.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:35.901 "hdgst": ${hdgst:-false}, 00:24:35.901 "ddgst": ${ddgst:-false} 00:24:35.901 }, 00:24:35.901 "method": "bdev_nvme_attach_controller" 00:24:35.901 } 00:24:35.901 EOF 00:24:35.901 )") 00:24:35.901 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:24:35.901 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:24:35.901 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:24:35.901 12:55:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:35.901 "params": { 00:24:35.901 "name": "Nvme1", 00:24:35.901 "trtype": "tcp", 00:24:35.901 "traddr": "10.0.0.2", 00:24:35.901 "adrfam": "ipv4", 00:24:35.901 "trsvcid": "4420", 00:24:35.901 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.901 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:35.901 "hdgst": false, 00:24:35.901 "ddgst": false 00:24:35.901 }, 00:24:35.901 "method": "bdev_nvme_attach_controller" 00:24:35.901 }' 00:24:35.901 [2024-11-28 12:55:05.816444] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:24:35.901 [2024-11-28 12:55:05.816517] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3436556 ] 00:24:35.901 [2024-11-28 12:55:05.964887] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
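The `gen_nvmf_target_json` trace above expands a here-doc template per subsystem and pipes the result through `jq`; the final `printf` shows the fully substituted JSON handed to bdevio. A rough Python equivalent of that expansion, with every value copied from the printed output above (the helper name is invented for illustration, not part of the SPDK scripts):

```python
import json

# Reconstruct the per-subsystem config object that gen_nvmf_target_json's
# here-doc expands to (values taken from the printf output in the trace).
def target_json(subsystem: int, traddr: str, trsvcid: str) -> dict:
    return {
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": "tcp",
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": False,   # ${hdgst:-false} with hdgst unset
            "ddgst": False,   # ${ddgst:-false} with ddgst unset
        },
        "method": "bdev_nvme_attach_controller",
    }

print(json.dumps(target_json(1, "10.0.0.2", "4420"), indent=2))
```

In the actual script the JSON is fed to bdevio via `--json /dev/fd/62`, so the target attaches controller `Nvme1` over TCP to 10.0.0.2:4420 without a config file on disk.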
00:24:35.901 [2024-11-28 12:55:06.013684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:36.162 [2024-11-28 12:55:06.059712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.162 [2024-11-28 12:55:06.059873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.162 [2024-11-28 12:55:06.059874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:36.423 I/O targets: 00:24:36.423 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:24:36.423 00:24:36.423 00:24:36.423 CUnit - A unit testing framework for C - Version 2.1-3 00:24:36.423 http://cunit.sourceforge.net/ 00:24:36.423 00:24:36.423 00:24:36.423 Suite: bdevio tests on: Nvme1n1 00:24:36.423 Test: blockdev write read block ...passed 00:24:36.423 Test: blockdev write zeroes read block ...passed 00:24:36.423 Test: blockdev write zeroes read no split ...passed 00:24:36.423 Test: blockdev write zeroes read split ...passed 00:24:36.423 Test: blockdev write zeroes read split partial ...passed 00:24:36.423 Test: blockdev reset ...[2024-11-28 12:55:06.539077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:36.423 [2024-11-28 12:55:06.539191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c2160 (9): Bad file descriptor 00:24:36.685 [2024-11-28 12:55:06.597439] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:24:36.685 passed 00:24:36.685 Test: blockdev write read 8 blocks ...passed 00:24:36.685 Test: blockdev write read size > 128k ...passed 00:24:36.685 Test: blockdev write read invalid size ...passed 00:24:36.685 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:36.685 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:36.685 Test: blockdev write read max offset ...passed 00:24:36.685 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:36.946 Test: blockdev writev readv 8 blocks ...passed 00:24:36.946 Test: blockdev writev readv 30 x 1block ...passed 00:24:36.946 Test: blockdev writev readv block ...passed 00:24:36.946 Test: blockdev writev readv size > 128k ...passed 00:24:36.946 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:36.946 Test: blockdev comparev and writev ...[2024-11-28 12:55:06.865181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:36.946 [2024-11-28 12:55:06.865240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:36.946 [2024-11-28 12:55:06.865258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:36.946 [2024-11-28 12:55:06.865267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.946 [2024-11-28 12:55:06.865839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:36.946 [2024-11-28 12:55:06.865852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:36.946 [2024-11-28 12:55:06.865868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:36.946 [2024-11-28 12:55:06.865877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:36.946 [2024-11-28 12:55:06.866416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:36.946 [2024-11-28 12:55:06.866429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:36.946 [2024-11-28 12:55:06.866444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:36.947 [2024-11-28 12:55:06.866463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:36.947 [2024-11-28 12:55:06.867001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:36.947 [2024-11-28 12:55:06.867014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:36.947 [2024-11-28 12:55:06.867028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:36.947 [2024-11-28 12:55:06.867037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:36.947 passed 00:24:36.947 Test: blockdev nvme passthru rw ...passed 00:24:36.947 Test: blockdev nvme passthru vendor specific ...[2024-11-28 12:55:06.952005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:36.947 [2024-11-28 12:55:06.952080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:36.947 [2024-11-28 12:55:06.952458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:36.947 [2024-11-28 12:55:06.952472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:36.947 [2024-11-28 12:55:06.952843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:36.947 [2024-11-28 12:55:06.952855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:36.947 [2024-11-28 12:55:06.953231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:36.947 [2024-11-28 12:55:06.953243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:36.947 passed 00:24:36.947 Test: blockdev nvme admin passthru ...passed 00:24:36.947 Test: blockdev copy ...passed 00:24:36.947 00:24:36.947 Run Summary: Type Total Ran Passed Failed Inactive 00:24:36.947 suites 1 1 n/a 0 0 00:24:36.947 tests 23 23 23 0 0 00:24:36.947 asserts 152 152 152 0 n/a 00:24:36.947 00:24:36.947 Elapsed time = 1.310 seconds 00:24:37.209 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:37.209 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.209 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:37.470 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.470 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:24:37.470 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:24:37.470 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:37.470 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:24:37.470 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:37.470 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:24:37.470 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:37.470 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:37.470 rmmod nvme_tcp 00:24:37.470 rmmod nvme_fabrics 00:24:37.470 rmmod nvme_keyring 00:24:37.470 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:37.470 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:24:37.470 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:24:37.470 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3436201 ']' 00:24:37.470 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3436201 00:24:37.470 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3436201 ']' 00:24:37.470 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3436201 00:24:37.470 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:24:37.470 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:37.470 12:55:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3436201 00:24:37.470 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:24:37.470 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:24:37.470 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3436201' 00:24:37.470 killing process with pid 3436201 00:24:37.470 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3436201 00:24:37.470 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3436201 00:24:37.732 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:37.732 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:37.732 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:37.732 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:24:37.732 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:24:37.732 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:37.732 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:24:37.732 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:37.732 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:37.732 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
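The `iptr` cleanup step above runs `iptables-save | grep -v SPDK_NVMF | iptables-restore`: it removes exactly the firewall rules that the earlier `ipts` call installed with an `SPDK_NVMF:` comment tag, leaving unrelated rules alone. The filtering half of that pipeline amounts to the following sketch (the function name and sample dump are made up here; the real restore is of course done by `iptables-restore`):

```python
# Drop every line of an iptables-save dump that carries the given tag,
# mirroring the `grep -v SPDK_NVMF` stage of the iptr cleanup pipeline.
def drop_tagged_rules(save_dump: str, tag: str = "SPDK_NVMF") -> str:
    return "\n".join(line for line in save_dump.splitlines() if tag not in line)

dump = "\n".join([
    '-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT '
    '-m comment --comment "SPDK_NVMF:-I INPUT 1 ..."',
    "-A INPUT -j DROP",
])
print(drop_tagged_rules(dump))  # only the untagged DROP rule survives
```

Tagging rules with a comment at insert time is what makes this teardown safe to run repeatedly: the cleanup never has to remember rule positions, only the tag.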
00:24:37.732 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.732 12:55:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.298 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:40.298 00:24:40.298 real 0m12.881s 00:24:40.298 user 0m15.372s 00:24:40.298 sys 0m6.800s 00:24:40.298 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:40.298 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:40.298 ************************************ 00:24:40.298 END TEST nvmf_bdevio_no_huge 00:24:40.298 ************************************ 00:24:40.298 12:55:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:40.298 12:55:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:40.298 12:55:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:40.298 12:55:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:40.298 ************************************ 00:24:40.298 START TEST nvmf_tls 00:24:40.298 ************************************ 00:24:40.298 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:40.298 * Looking for test storage... 
00:24:40.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:40.298 12:55:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:40.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.298 --rc genhtml_branch_coverage=1 00:24:40.298 --rc genhtml_function_coverage=1 00:24:40.298 --rc genhtml_legend=1 00:24:40.298 --rc geninfo_all_blocks=1 00:24:40.298 --rc geninfo_unexecuted_blocks=1 
00:24:40.298 00:24:40.298 ' 00:24:40.298 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:40.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.298 --rc genhtml_branch_coverage=1 00:24:40.298 --rc genhtml_function_coverage=1 00:24:40.298 --rc genhtml_legend=1 00:24:40.298 --rc geninfo_all_blocks=1 00:24:40.298 --rc geninfo_unexecuted_blocks=1 00:24:40.298 00:24:40.299 ' 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:40.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.299 --rc genhtml_branch_coverage=1 00:24:40.299 --rc genhtml_function_coverage=1 00:24:40.299 --rc genhtml_legend=1 00:24:40.299 --rc geninfo_all_blocks=1 00:24:40.299 --rc geninfo_unexecuted_blocks=1 00:24:40.299 00:24:40.299 ' 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:40.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.299 --rc genhtml_branch_coverage=1 00:24:40.299 --rc genhtml_function_coverage=1 00:24:40.299 --rc genhtml_legend=1 00:24:40.299 --rc geninfo_all_blocks=1 00:24:40.299 --rc geninfo_unexecuted_blocks=1 00:24:40.299 00:24:40.299 ' 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.299 12:55:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
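In the trace above, `nvme gen-hostnqn` produces the UUID-based host NQN (`nqn.2014-08.org.nvmexpress:uuid:00d0226a-...`) that the test exports as `NVME_HOSTNQN`. A sketch of that NQN shape (assumption: a random UUID stands in here; the real tool prefers a stable host UUID from DMI/sysfs when one is available):

```python
import uuid

def gen_hostnqn() -> str:
    """Build a UUID-based host NQN in the form seen in the log:
    nqn.2014-08.org.nvmexpress:uuid:<uuid>."""
    return f"nqn.2014-08.org.nvmexpress:uuid:{uuid.uuid4()}"

print(gen_hostnqn())
```

The fixed `nqn.2014-08.org.nvmexpress:uuid:` prefix is what lets the target match the connecting host against the NQN registered with `nvmf_subsystem_add_host` later in the run.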
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:40.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:24:40.299 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.444 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:48.444 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:24:48.444 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:48.444 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:48.444 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:48.444 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:48.444 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:48.444 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:24:48.444 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:48.444 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:24:48.444 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:24:48.444 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:24:48.444 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:24:48.444 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:24:48.444 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:48.445 12:55:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:48.445 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:48.445 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:48.445 12:55:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:48.445 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:48.445 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:48.445 12:55:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:48.445 
12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:48.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:48.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:24:48.445 00:24:48.445 --- 10.0.0.2 ping statistics --- 00:24:48.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.445 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:48.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:48.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:24:48.445 00:24:48.445 --- 10.0.0.1 ping statistics --- 00:24:48.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.445 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3440973 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3440973 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3440973 ']' 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:48.445 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.445 [2024-11-28 12:55:17.805877] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:24:48.445 [2024-11-28 12:55:17.805943] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:48.445 [2024-11-28 12:55:17.954518] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:48.445 [2024-11-28 12:55:18.011970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.445 [2024-11-28 12:55:18.037975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:48.445 [2024-11-28 12:55:18.038016] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:48.445 [2024-11-28 12:55:18.038024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:48.445 [2024-11-28 12:55:18.038031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:48.445 [2024-11-28 12:55:18.038038] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:48.445 [2024-11-28 12:55:18.038777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.705 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:48.705 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:48.705 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:48.705 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:48.705 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.705 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.705 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:24:48.705 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:24:48.964 true 00:24:48.964 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:48.964 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:24:48.964 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:24:48.964 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:24:48.964 
12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:49.225 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:49.225 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:24:49.485 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:24:49.485 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:24:49.485 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:24:49.485 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:49.485 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:24:49.747 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:24:49.747 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:24:49.747 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:24:49.747 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:50.008 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:24:50.008 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:24:50.008 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:24:50.009 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:50.009 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:24:50.269 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:24:50.269 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:24:50.270 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:24:50.530 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:50.530 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:24:50.791 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:24:50.791 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:24:50.791 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:24:50.791 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:24:50.791 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:24:50.791 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:50.791 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:50.791 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:24:50.791 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:24:50.791 12:55:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:50.791 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:24:50.791 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:24:50.791 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:24:50.791 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:50.791 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:24:50.791 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:24:50.791 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:24:50.791 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:50.791 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:24:50.791 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.IhtLLfysZr 00:24:50.792 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:24:50.792 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.j0hK2fNyD6 00:24:50.792 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:50.792 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:50.792 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.IhtLLfysZr 00:24:50.792 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
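The `format_interchange_psk` calls above shell out to a small `python -` heredoc (`nvmf/common.sh@733`) to turn each configured key into the NVMe TLS PSK interchange form. A sketch of what that snippet plausibly computes, judging from the output in the log (assumption: base64 of the configured key bytes followed by their CRC-32 appended little-endian, under the `NVMeTLSkey-1:<hmac>:` prefix):

```python
import base64
import zlib

def format_interchange_psk(key: str, hmac: int) -> str:
    """NVMe TLS PSK interchange format as it appears in the log:
    NVMeTLSkey-1:<hmac as 2 hex digits>:<base64(key bytes + CRC-32 LE)>:
    The CRC lets a consumer detect a corrupted or truncated key string."""
    data = key.encode()
    crc = zlib.crc32(data).to_bytes(4, "little")
    b64 = base64.b64encode(data + crc).decode()
    return "NVMeTLSkey-1:{:02x}:{}:".format(hmac, b64)

print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))
```

The two keys produced this way are then written to `mktemp` files, locked down with `chmod 0600`, and registered with `keyring_file_add_key` so the subsystem's `--psk` reference can resolve them.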
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.j0hK2fNyD6 00:24:50.792 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:51.053 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:24:51.314 12:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.IhtLLfysZr 00:24:51.314 12:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.IhtLLfysZr 00:24:51.314 12:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:51.314 [2024-11-28 12:55:21.356357] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.314 12:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:51.576 12:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:51.576 [2024-11-28 12:55:21.692381] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:51.576 [2024-11-28 12:55:21.692593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.837 12:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:51.837 malloc0 00:24:51.837 12:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:52.165 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.IhtLLfysZr 00:24:52.165 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:52.482 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.IhtLLfysZr 00:25:02.525 Initializing NVMe Controllers 00:25:02.525 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:02.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:02.525 Initialization complete. Launching workers. 
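The `format_interchange_psk` step near the start of this run pipes the configured key through an inline `python -` snippet before it is written to the temp file and registered with `keyring_file_add_key`. A minimal sketch of what that snippet appears to compute (assumption: the payload is the key string's ASCII bytes followed by their little-endian CRC32, base64-encoded between the `NVMeTLSkey-1:<digest>:` prefix and a trailing colon; `format_interchange_psk` below is a reconstruction, not SPDK's actual helper):

```python
import base64
import struct
import zlib

def format_interchange_psk(key: str, digest: int = 1, prefix: str = "NVMeTLSkey-1") -> str:
    """Wrap a configured PSK in the NVMe/TCP TLS interchange format.

    Payload = ASCII bytes of the key string, followed by their CRC32
    packed little-endian, base64-encoded, with the version prefix and
    the two-digit digest id ("01" in this log) around it.
    """
    payload = key.encode("ascii")
    crc = struct.pack("<I", zlib.crc32(payload))
    return f"{prefix}:{digest:02d}:{base64.b64encode(payload + crc).decode('ascii')}:"

# Should reproduce the key_2 value recorded in the log above.
key_2 = format_interchange_psk("ffeeddccbbaa99887766554433221100")
```

The two interchange keys printed earlier in the run (for `00112233445566778899aabbccddeeff` and `ffeeddccbbaa99887766554433221100`) both decode to exactly this shape: 32 ASCII bytes of key followed by 4 CRC bytes.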
00:25:02.525 ======================================================== 00:25:02.525 Latency(us) 00:25:02.525 Device Information : IOPS MiB/s Average min max 00:25:02.525 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18645.98 72.84 3432.60 1073.55 4022.39 00:25:02.525 ======================================================== 00:25:02.525 Total : 18645.98 72.84 3432.60 1073.55 4022.39 00:25:02.525 00:25:02.525 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IhtLLfysZr 00:25:02.525 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:02.525 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:02.525 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:02.525 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.IhtLLfysZr 00:25:02.525 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:02.525 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3443962 00:25:02.525 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:02.525 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3443962 /var/tmp/bdevperf.sock 00:25:02.525 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:02.525 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3443962 ']' 00:25:02.525 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:25:02.525 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.525 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:02.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:02.525 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.525 12:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:02.525 [2024-11-28 12:55:32.640945] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:25:02.525 [2024-11-28 12:55:32.641003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3443962 ] 00:25:02.786 [2024-11-28 12:55:32.773786] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:02.786 [2024-11-28 12:55:32.830808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.786 [2024-11-28 12:55:32.848553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.360 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:03.360 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:03.360 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IhtLLfysZr 00:25:03.622 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:03.622 [2024-11-28 12:55:33.716566] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:03.883 TLSTESTn1 00:25:03.883 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:03.883 Running I/O for 10 seconds... 
00:25:06.210 4960.00 IOPS, 19.38 MiB/s [2024-11-28T11:55:36.910Z] 5459.00 IOPS, 21.32 MiB/s [2024-11-28T11:55:38.298Z] 5393.33 IOPS, 21.07 MiB/s [2024-11-28T11:55:39.243Z] 5521.25 IOPS, 21.57 MiB/s [2024-11-28T11:55:40.185Z] 5217.40 IOPS, 20.38 MiB/s [2024-11-28T11:55:41.128Z] 5342.33 IOPS, 20.87 MiB/s [2024-11-28T11:55:42.069Z] 5377.86 IOPS, 21.01 MiB/s [2024-11-28T11:55:43.010Z] 5481.75 IOPS, 21.41 MiB/s [2024-11-28T11:55:43.952Z] 5506.33 IOPS, 21.51 MiB/s [2024-11-28T11:55:43.952Z] 5587.70 IOPS, 21.83 MiB/s 00:25:13.825 Latency(us) 00:25:13.825 [2024-11-28T11:55:43.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.825 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:13.825 Verification LBA range: start 0x0 length 0x2000 00:25:13.825 TLSTESTn1 : 10.02 5591.56 21.84 0.00 0.00 22846.59 5857.29 79702.99 00:25:13.825 [2024-11-28T11:55:43.952Z] =================================================================================================================== 00:25:13.825 [2024-11-28T11:55:43.952Z] Total : 5591.56 21.84 0.00 0.00 22846.59 5857.29 79702.99 00:25:13.825 { 00:25:13.825 "results": [ 00:25:13.825 { 00:25:13.825 "job": "TLSTESTn1", 00:25:13.825 "core_mask": "0x4", 00:25:13.825 "workload": "verify", 00:25:13.825 "status": "finished", 00:25:13.825 "verify_range": { 00:25:13.825 "start": 0, 00:25:13.825 "length": 8192 00:25:13.825 }, 00:25:13.825 "queue_depth": 128, 00:25:13.825 "io_size": 4096, 00:25:13.825 "runtime": 10.015632, 00:25:13.825 "iops": 5591.559274542036, 00:25:13.825 "mibps": 21.84202841617983, 00:25:13.825 "io_failed": 0, 00:25:13.825 "io_timeout": 0, 00:25:13.825 "avg_latency_us": 22846.59193529553, 00:25:13.825 "min_latency_us": 5857.293685265619, 00:25:13.825 "max_latency_us": 79702.98696959572 00:25:13.825 } 00:25:13.825 ], 00:25:13.825 "core_count": 1 00:25:13.825 } 00:25:13.825 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:25:13.825 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3443962 00:25:13.825 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3443962 ']' 00:25:13.825 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3443962 00:25:13.825 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:13.825 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:13.825 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3443962 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3443962' 00:25:14.086 killing process with pid 3443962 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3443962 00:25:14.086 Received shutdown signal, test time was about 10.000000 seconds 00:25:14.086 00:25:14.086 Latency(us) 00:25:14.086 [2024-11-28T11:55:44.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:14.086 [2024-11-28T11:55:44.213Z] =================================================================================================================== 00:25:14.086 [2024-11-28T11:55:44.213Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3443962 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.j0hK2fNyD6 00:25:14.086 
12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.j0hK2fNyD6 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.j0hK2fNyD6 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.j0hK2fNyD6 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3446147 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3446147 /var/tmp/bdevperf.sock 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3446147 ']' 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:14.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:14.086 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:14.086 [2024-11-28 12:55:44.156978] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:25:14.086 [2024-11-28 12:55:44.157037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3446147 ] 00:25:14.347 [2024-11-28 12:55:44.289466] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
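The JSON summary from the first bdevperf run above reports `iops`, `mibps`, runtime and average latency together. These fields are mutually consistent through the fixed 4096-byte I/O size (`-o 4096`); a quick sanity check with the values copied verbatim from that summary:

```python
# Cross-check the bdevperf summary fields from the log above.
io_size = 4096                     # bytes per I/O, from "-o 4096"
runtime_s = 10.015632              # "runtime" field
iops = 5591.559274542036           # "iops" field

mibps = iops * io_size / (1 << 20)  # IOPS * bytes-per-IO -> MiB/s
total_ios = iops * runtime_s        # total completed I/Os over the run

# With queue depth 128 (-q 128), Little's law gives a rough latency check:
# 128 / avg_latency_s should land near the reported IOPS.
```

`mibps` here comes out to the `21.842...` value the JSON reports, and `128 / 0.02284659 ≈ 5602` is within a percent of the measured IOPS, as expected for a saturated queue.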
00:25:14.347 [2024-11-28 12:55:44.342503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.347 [2024-11-28 12:55:44.358558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:14.916 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.916 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:14.916 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.j0hK2fNyD6 00:25:15.186 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:15.186 [2024-11-28 12:55:45.242479] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:15.186 [2024-11-28 12:55:45.246970] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:15.186 [2024-11-28 12:55:45.247593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65ebc0 (107): Transport endpoint is not connected 00:25:15.186 [2024-11-28 12:55:45.248585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65ebc0 (9): Bad file descriptor 00:25:15.186 [2024-11-28 12:55:45.249586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:25:15.186 [2024-11-28 12:55:45.249598] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:15.186 [2024-11-28 12:55:45.249604] nvme.c: 
895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:25:15.186 [2024-11-28 12:55:45.249610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:25:15.186 request: 00:25:15.186 { 00:25:15.186 "name": "TLSTEST", 00:25:15.186 "trtype": "tcp", 00:25:15.186 "traddr": "10.0.0.2", 00:25:15.186 "adrfam": "ipv4", 00:25:15.186 "trsvcid": "4420", 00:25:15.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.186 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:15.186 "prchk_reftag": false, 00:25:15.186 "prchk_guard": false, 00:25:15.186 "hdgst": false, 00:25:15.186 "ddgst": false, 00:25:15.186 "psk": "key0", 00:25:15.186 "allow_unrecognized_csi": false, 00:25:15.186 "method": "bdev_nvme_attach_controller", 00:25:15.186 "req_id": 1 00:25:15.186 } 00:25:15.186 Got JSON-RPC error response 00:25:15.186 response: 00:25:15.186 { 00:25:15.186 "code": -5, 00:25:15.186 "message": "Input/output error" 00:25:15.186 } 00:25:15.186 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3446147 00:25:15.186 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3446147 ']' 00:25:15.186 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3446147 00:25:15.186 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:15.186 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:15.186 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3446147 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 
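The `request:`/`response:` pair dumped above is ordinary SPDK JSON-RPC traffic on the bdevperf Unix socket (`/var/tmp/bdevperf.sock`), which is what `scripts/rpc.py -s ...` drives under the hood. A minimal sketch of how such a request could be framed and sent; `build_request` and `send_request` are hypothetical helpers, and the single-`recv()` read is a simplification of real framing:

```python
import json
import socket

def build_request(method: str, params: dict, req_id: int = 1) -> dict:
    """Frame a JSON-RPC 2.0 request like the bdev_nvme_attach_controller
    call logged above."""
    return {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}

def send_request(sock_path: str, request: dict, bufsize: int = 65536) -> dict:
    """Send one request over a Unix-domain socket and parse one reply.
    Simplified: assumes the whole response arrives in a single recv()."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(request).encode())
        return json.loads(s.recv(bufsize))

req = build_request(
    "bdev_nvme_attach_controller",
    {
        "name": "TLSTEST",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "psk": "key0",
    },
)
# Against a live bdevperf socket, sending this with the wrong PSK loaded
# produces the code -5 "Input/output error" response seen in the log.
```

The negative tests in this section rely on exactly that failure: attaching with a mismatched key or host NQN makes the TLS handshake fail, the attach RPC returns `-5`, and the surrounding `NOT run_bdevperf` wrapper treats the non-zero exit as a pass.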
00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3446147' 00:25:15.450 killing process with pid 3446147 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3446147 00:25:15.450 Received shutdown signal, test time was about 10.000000 seconds 00:25:15.450 00:25:15.450 Latency(us) 00:25:15.450 [2024-11-28T11:55:45.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.450 [2024-11-28T11:55:45.577Z] =================================================================================================================== 00:25:15.450 [2024-11-28T11:55:45.577Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3446147 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IhtLLfysZr 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IhtLLfysZr 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local 
arg=run_bdevperf 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IhtLLfysZr 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.IhtLLfysZr 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3446328 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3446328 /var/tmp/bdevperf.sock 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3446328 ']' 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:15.450 12:55:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:15.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:15.450 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:15.450 [2024-11-28 12:55:45.486753] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:25:15.450 [2024-11-28 12:55:45.486810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3446328 ] 00:25:15.710 [2024-11-28 12:55:45.619353] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:15.710 [2024-11-28 12:55:45.672057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.710 [2024-11-28 12:55:45.686309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:16.280 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.280 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:16.280 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IhtLLfysZr 00:25:16.541 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:25:16.541 [2024-11-28 12:55:46.614498] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:16.541 [2024-11-28 12:55:46.619127] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:16.541 [2024-11-28 12:55:46.619145] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:16.541 [2024-11-28 12:55:46.619169] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:16.541 [2024-11-28 12:55:46.619848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfefbc0 (107): Transport endpoint is not connected 00:25:16.541 [2024-11-28 12:55:46.620841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0xfefbc0 (9): Bad file descriptor 00:25:16.541 [2024-11-28 12:55:46.621840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:25:16.541 [2024-11-28 12:55:46.621847] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:16.541 [2024-11-28 12:55:46.621853] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:25:16.541 [2024-11-28 12:55:46.621859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:25:16.541 request: 00:25:16.541 { 00:25:16.541 "name": "TLSTEST", 00:25:16.541 "trtype": "tcp", 00:25:16.541 "traddr": "10.0.0.2", 00:25:16.541 "adrfam": "ipv4", 00:25:16.541 "trsvcid": "4420", 00:25:16.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:16.541 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:16.541 "prchk_reftag": false, 00:25:16.541 "prchk_guard": false, 00:25:16.541 "hdgst": false, 00:25:16.541 "ddgst": false, 00:25:16.541 "psk": "key0", 00:25:16.541 "allow_unrecognized_csi": false, 00:25:16.541 "method": "bdev_nvme_attach_controller", 00:25:16.541 "req_id": 1 00:25:16.541 } 00:25:16.541 Got JSON-RPC error response 00:25:16.541 response: 00:25:16.541 { 00:25:16.541 "code": -5, 00:25:16.541 "message": "Input/output error" 00:25:16.541 } 00:25:16.541 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3446328 00:25:16.541 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3446328 ']' 00:25:16.541 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3446328 00:25:16.541 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:16.542 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:25:16.542 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3446328 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3446328' 00:25:16.803 killing process with pid 3446328 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3446328 00:25:16.803 Received shutdown signal, test time was about 10.000000 seconds 00:25:16.803 00:25:16.803 Latency(us) 00:25:16.803 [2024-11-28T11:55:46.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.803 [2024-11-28T11:55:46.930Z] =================================================================================================================== 00:25:16.803 [2024-11-28T11:55:46.930Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3446328 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IhtLLfysZr 00:25:16.803 12:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IhtLLfysZr 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IhtLLfysZr 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.IhtLLfysZr 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3446668 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3446668 /var/tmp/bdevperf.sock 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3446668 ']' 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:16.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:16.803 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:16.803 [2024-11-28 12:55:46.864130] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:25:16.803 [2024-11-28 12:55:46.864191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3446668 ] 00:25:17.063 [2024-11-28 12:55:46.996759] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:17.063 [2024-11-28 12:55:47.050930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.063 [2024-11-28 12:55:47.065480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:17.633 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:17.633 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:17.634 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IhtLLfysZr 00:25:17.894 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:17.894 [2024-11-28 12:55:47.997462] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:17.894 [2024-11-28 12:55:48.007031] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:17.894 [2024-11-28 12:55:48.007054] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:17.894 [2024-11-28 12:55:48.007072] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:17.894 [2024-11-28 12:55:48.007751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbdbc0 (107): Transport endpoint is not connected 00:25:17.894 [2024-11-28 12:55:48.008745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1cbdbc0 (9): Bad file descriptor 00:25:17.895 [2024-11-28 12:55:48.009745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:25:17.895 [2024-11-28 12:55:48.009752] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:17.895 [2024-11-28 12:55:48.009758] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:25:17.895 [2024-11-28 12:55:48.009765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:25:17.895 request: 00:25:17.895 { 00:25:17.895 "name": "TLSTEST", 00:25:17.895 "trtype": "tcp", 00:25:17.895 "traddr": "10.0.0.2", 00:25:17.895 "adrfam": "ipv4", 00:25:17.895 "trsvcid": "4420", 00:25:17.895 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:17.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:17.895 "prchk_reftag": false, 00:25:17.895 "prchk_guard": false, 00:25:17.895 "hdgst": false, 00:25:17.895 "ddgst": false, 00:25:17.895 "psk": "key0", 00:25:17.895 "allow_unrecognized_csi": false, 00:25:17.895 "method": "bdev_nvme_attach_controller", 00:25:17.895 "req_id": 1 00:25:17.895 } 00:25:17.895 Got JSON-RPC error response 00:25:17.895 response: 00:25:17.895 { 00:25:17.895 "code": -5, 00:25:17.895 "message": "Input/output error" 00:25:17.895 } 00:25:18.156 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3446668 00:25:18.156 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3446668 ']' 00:25:18.156 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3446668 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3446668 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3446668' 00:25:18.157 killing process with pid 3446668 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3446668 00:25:18.157 Received shutdown signal, test time was about 10.000000 seconds 00:25:18.157 00:25:18.157 Latency(us) 00:25:18.157 [2024-11-28T11:55:48.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.157 [2024-11-28T11:55:48.284Z] =================================================================================================================== 00:25:18.157 [2024-11-28T11:55:48.284Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3446668 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@652 -- # local es=0 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3447012 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3447012 /var/tmp/bdevperf.sock 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
verify -t 10 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3447012 ']' 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:18.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:18.157 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:18.157 [2024-11-28 12:55:48.254746] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:25:18.157 [2024-11-28 12:55:48.254802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3447012 ] 00:25:18.418 [2024-11-28 12:55:48.387566] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:18.418 [2024-11-28 12:55:48.441079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.418 [2024-11-28 12:55:48.455475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:18.990 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.990 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:18.990 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:25:19.250 [2024-11-28 12:55:49.207255] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:25:19.250 [2024-11-28 12:55:49.207278] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:19.250 request: 00:25:19.250 { 00:25:19.250 "name": "key0", 00:25:19.250 "path": "", 00:25:19.250 "method": "keyring_file_add_key", 00:25:19.250 "req_id": 1 00:25:19.250 } 00:25:19.250 Got JSON-RPC error response 00:25:19.250 response: 00:25:19.250 { 00:25:19.250 "code": -1, 00:25:19.250 "message": "Operation not permitted" 00:25:19.250 } 00:25:19.250 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:19.511 [2024-11-28 12:55:49.395368] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:19.511 [2024-11-28 12:55:49.395389] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:25:19.511 request: 00:25:19.511 { 00:25:19.511 "name": "TLSTEST", 00:25:19.511 "trtype": "tcp", 00:25:19.511 "traddr": "10.0.0.2", 00:25:19.511 "adrfam": "ipv4", 00:25:19.511 "trsvcid": 
"4420", 00:25:19.511 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:19.511 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:19.511 "prchk_reftag": false, 00:25:19.511 "prchk_guard": false, 00:25:19.511 "hdgst": false, 00:25:19.511 "ddgst": false, 00:25:19.511 "psk": "key0", 00:25:19.511 "allow_unrecognized_csi": false, 00:25:19.511 "method": "bdev_nvme_attach_controller", 00:25:19.511 "req_id": 1 00:25:19.511 } 00:25:19.511 Got JSON-RPC error response 00:25:19.511 response: 00:25:19.511 { 00:25:19.511 "code": -126, 00:25:19.511 "message": "Required key not available" 00:25:19.511 } 00:25:19.511 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3447012 00:25:19.511 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3447012 ']' 00:25:19.511 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3447012 00:25:19.511 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:19.511 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:19.511 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3447012 00:25:19.511 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:19.511 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:19.511 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3447012' 00:25:19.511 killing process with pid 3447012 00:25:19.511 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3447012 00:25:19.511 Received shutdown signal, test time was about 10.000000 seconds 00:25:19.511 00:25:19.511 Latency(us) 00:25:19.511 [2024-11-28T11:55:49.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:25:19.511 [2024-11-28T11:55:49.638Z] =================================================================================================================== 00:25:19.511 [2024-11-28T11:55:49.638Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:19.511 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3447012 00:25:19.511 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:19.511 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:25:19.511 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:19.511 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:19.511 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:19.511 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3440973 00:25:19.511 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3440973 ']' 00:25:19.511 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3440973 00:25:19.511 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:19.511 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:19.511 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3440973 00:25:19.772 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:19.772 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:19.772 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3440973' 00:25:19.772 killing process 
with pid 3440973 00:25:19.772 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3440973 00:25:19.772 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3440973 00:25:19.772 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:25:19.772 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:25:19.772 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:25:19.772 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:19.772 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:25:19.772 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:25:19.772 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:25:19.772 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:19.772 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:25:19.772 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.d3G3e1sSNa 00:25:19.772 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:19.773 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.d3G3e1sSNa 00:25:19.773 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:25:19.773 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:25:19.773 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:19.773 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:19.773 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3447363 00:25:19.773 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:19.773 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3447363 00:25:19.773 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3447363 ']' 00:25:19.773 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.773 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:19.773 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.773 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:19.773 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:19.773 [2024-11-28 12:55:49.864674] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:25:19.773 [2024-11-28 12:55:49.864730] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.033 [2024-11-28 12:55:50.004143] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:20.033 [2024-11-28 12:55:50.060294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.033 [2024-11-28 12:55:50.082286] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.033 [2024-11-28 12:55:50.082326] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.033 [2024-11-28 12:55:50.082333] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.033 [2024-11-28 12:55:50.082339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.033 [2024-11-28 12:55:50.082344] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:20.033 [2024-11-28 12:55:50.082855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.603 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:20.603 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:20.603 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:20.603 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:20.603 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:20.603 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.603 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.d3G3e1sSNa 00:25:20.603 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.d3G3e1sSNa 00:25:20.603 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:20.864 [2024-11-28 12:55:50.868286] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.864 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:21.124 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:21.124 [2024-11-28 12:55:51.240342] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:21.124 [2024-11-28 12:55:51.240532] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:25:21.384 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:21.384 malloc0 00:25:21.384 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:21.644 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.d3G3e1sSNa 00:25:21.905 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:21.905 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.d3G3e1sSNa 00:25:21.905 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:21.905 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:21.905 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:21.905 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.d3G3e1sSNa 00:25:21.905 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:21.905 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:21.905 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3447731 00:25:21.905 12:55:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:21.905 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3447731 /var/tmp/bdevperf.sock 00:25:21.905 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3447731 ']' 00:25:21.905 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:21.905 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:21.905 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:21.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:21.905 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:21.905 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:22.165 [2024-11-28 12:55:52.032455] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:25:22.166 [2024-11-28 12:55:52.032509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3447731 ] 00:25:22.166 [2024-11-28 12:55:52.165225] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:22.166 [2024-11-28 12:55:52.217854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.166 [2024-11-28 12:55:52.233786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:22.737 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:22.737 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:22.737 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.d3G3e1sSNa 00:25:22.997 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:23.257 [2024-11-28 12:55:53.173701] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:23.257 TLSTESTn1 00:25:23.257 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:23.257 Running I/O for 10 seconds... 
00:25:25.580 4767.00 IOPS, 18.62 MiB/s [2024-11-28T11:55:56.648Z] 4906.00 IOPS, 19.16 MiB/s [2024-11-28T11:55:57.588Z] 5303.67 IOPS, 20.72 MiB/s [2024-11-28T11:55:58.530Z] 5356.50 IOPS, 20.92 MiB/s [2024-11-28T11:55:59.473Z] 5420.80 IOPS, 21.18 MiB/s [2024-11-28T11:56:00.416Z] 5338.00 IOPS, 20.85 MiB/s [2024-11-28T11:56:01.799Z] 5467.71 IOPS, 21.36 MiB/s [2024-11-28T11:56:02.369Z] 5456.00 IOPS, 21.31 MiB/s [2024-11-28T11:56:03.756Z] 5512.33 IOPS, 21.53 MiB/s [2024-11-28T11:56:03.756Z] 5551.30 IOPS, 21.68 MiB/s 00:25:33.629 Latency(us) 00:25:33.629 [2024-11-28T11:56:03.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.629 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:33.629 Verification LBA range: start 0x0 length 0x2000 00:25:33.629 TLSTESTn1 : 10.02 5555.10 21.70 0.00 0.00 23008.05 5063.55 65251.35 00:25:33.629 [2024-11-28T11:56:03.756Z] =================================================================================================================== 00:25:33.629 [2024-11-28T11:56:03.756Z] Total : 5555.10 21.70 0.00 0.00 23008.05 5063.55 65251.35 00:25:33.629 { 00:25:33.629 "results": [ 00:25:33.629 { 00:25:33.629 "job": "TLSTESTn1", 00:25:33.629 "core_mask": "0x4", 00:25:33.629 "workload": "verify", 00:25:33.629 "status": "finished", 00:25:33.629 "verify_range": { 00:25:33.629 "start": 0, 00:25:33.629 "length": 8192 00:25:33.629 }, 00:25:33.629 "queue_depth": 128, 00:25:33.629 "io_size": 4096, 00:25:33.629 "runtime": 10.016198, 00:25:33.629 "iops": 5555.101846029801, 00:25:33.629 "mibps": 21.69961658605391, 00:25:33.629 "io_failed": 0, 00:25:33.629 "io_timeout": 0, 00:25:33.629 "avg_latency_us": 23008.05417057406, 00:25:33.629 "min_latency_us": 5063.548279318409, 00:25:33.629 "max_latency_us": 65251.346475108585 00:25:33.629 } 00:25:33.629 ], 00:25:33.629 "core_count": 1 00:25:33.629 } 00:25:33.629 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:25:33.629 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3447731 00:25:33.629 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3447731 ']' 00:25:33.629 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3447731 00:25:33.629 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:33.629 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:33.629 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3447731 00:25:33.629 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:33.629 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:33.629 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3447731' 00:25:33.630 killing process with pid 3447731 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3447731 00:25:33.630 Received shutdown signal, test time was about 10.000000 seconds 00:25:33.630 00:25:33.630 Latency(us) 00:25:33.630 [2024-11-28T11:56:03.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.630 [2024-11-28T11:56:03.757Z] =================================================================================================================== 00:25:33.630 [2024-11-28T11:56:03.757Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3447731 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.d3G3e1sSNa 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.d3G3e1sSNa 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.d3G3e1sSNa 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.d3G3e1sSNa 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.d3G3e1sSNa 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3450067 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3450067 
/var/tmp/bdevperf.sock 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3450067 ']' 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:33.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:33.630 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:33.630 [2024-11-28 12:56:03.628281] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:25:33.630 [2024-11-28 12:56:03.628340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3450067 ] 00:25:33.891 [2024-11-28 12:56:03.761025] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:33.891 [2024-11-28 12:56:03.815944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.891 [2024-11-28 12:56:03.830374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:34.462 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:34.462 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:34.462 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.d3G3e1sSNa 00:25:34.462 [2024-11-28 12:56:04.582260] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.d3G3e1sSNa': 0100666 00:25:34.462 [2024-11-28 12:56:04.582285] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:34.462 request: 00:25:34.462 { 00:25:34.462 "name": "key0", 00:25:34.462 "path": "/tmp/tmp.d3G3e1sSNa", 00:25:34.462 "method": "keyring_file_add_key", 00:25:34.462 "req_id": 1 00:25:34.462 } 00:25:34.462 Got JSON-RPC error response 00:25:34.462 response: 00:25:34.462 { 00:25:34.462 "code": -1, 00:25:34.462 "message": "Operation not permitted" 00:25:34.462 } 00:25:34.723 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:34.723 [2024-11-28 12:56:04.758382] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:34.723 [2024-11-28 12:56:04.758406] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:25:34.723 request: 00:25:34.723 { 00:25:34.723 "name": "TLSTEST", 00:25:34.723 "trtype": "tcp", 00:25:34.723 "traddr": 
"10.0.0.2", 00:25:34.723 "adrfam": "ipv4", 00:25:34.723 "trsvcid": "4420", 00:25:34.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:34.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:34.723 "prchk_reftag": false, 00:25:34.723 "prchk_guard": false, 00:25:34.723 "hdgst": false, 00:25:34.723 "ddgst": false, 00:25:34.723 "psk": "key0", 00:25:34.723 "allow_unrecognized_csi": false, 00:25:34.723 "method": "bdev_nvme_attach_controller", 00:25:34.723 "req_id": 1 00:25:34.723 } 00:25:34.723 Got JSON-RPC error response 00:25:34.723 response: 00:25:34.723 { 00:25:34.723 "code": -126, 00:25:34.723 "message": "Required key not available" 00:25:34.723 } 00:25:34.723 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3450067 00:25:34.723 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3450067 ']' 00:25:34.723 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3450067 00:25:34.723 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:34.723 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:34.723 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3450067 00:25:34.987 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:34.987 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:34.987 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3450067' 00:25:34.987 killing process with pid 3450067 00:25:34.987 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3450067 00:25:34.987 Received shutdown signal, test time was about 10.000000 seconds 00:25:34.987 00:25:34.987 Latency(us) 00:25:34.987 
[2024-11-28T11:56:05.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.987 [2024-11-28T11:56:05.114Z] =================================================================================================================== 00:25:34.987 [2024-11-28T11:56:05.114Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:34.987 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3450067 00:25:34.987 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:34.987 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:25:34.987 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:34.987 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:34.987 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:34.987 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3447363 00:25:34.987 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3447363 ']' 00:25:34.987 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3447363 00:25:34.987 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:34.987 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:34.987 12:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3447363 00:25:34.987 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:34.987 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:34.987 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 
-- # echo 'killing process with pid 3447363' 00:25:34.987 killing process with pid 3447363 00:25:34.987 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3447363 00:25:34.987 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3447363 00:25:35.250 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:25:35.250 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:35.250 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:35.250 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:35.250 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3450328 00:25:35.250 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3450328 00:25:35.250 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:35.250 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3450328 ']' 00:25:35.250 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:35.250 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:35.250 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:35.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:35.250 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:35.250 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:35.250 [2024-11-28 12:56:05.188856] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:25:35.250 [2024-11-28 12:56:05.188919] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:35.250 [2024-11-28 12:56:05.330026] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:35.510 [2024-11-28 12:56:05.382978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.510 [2024-11-28 12:56:05.403263] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:35.510 [2024-11-28 12:56:05.403300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:35.510 [2024-11-28 12:56:05.403307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:35.510 [2024-11-28 12:56:05.403313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:35.510 [2024-11-28 12:56:05.403318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:35.510 [2024-11-28 12:56:05.403955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.081 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:36.081 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:36.081 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:36.081 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:36.081 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:36.081 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:36.081 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.d3G3e1sSNa 00:25:36.081 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:25:36.081 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.d3G3e1sSNa 00:25:36.081 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:25:36.081 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:36.081 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:25:36.081 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:36.081 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.d3G3e1sSNa 00:25:36.081 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.d3G3e1sSNa 00:25:36.081 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:36.081 [2024-11-28 12:56:06.190626] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:36.342 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:36.342 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:36.602 [2024-11-28 12:56:06.550679] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:36.602 [2024-11-28 12:56:06.550865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:36.602 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:36.862 malloc0 00:25:36.862 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:36.862 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.d3G3e1sSNa 00:25:37.122 [2024-11-28 12:56:07.104431] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.d3G3e1sSNa': 0100666 00:25:37.122 [2024-11-28 12:56:07.104450] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:37.122 request: 00:25:37.122 { 00:25:37.122 "name": "key0", 00:25:37.122 "path": "/tmp/tmp.d3G3e1sSNa", 00:25:37.122 "method": "keyring_file_add_key", 00:25:37.122 "req_id": 1 
00:25:37.122 } 00:25:37.122 Got JSON-RPC error response 00:25:37.122 response: 00:25:37.122 { 00:25:37.122 "code": -1, 00:25:37.122 "message": "Operation not permitted" 00:25:37.122 } 00:25:37.122 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:37.382 [2024-11-28 12:56:07.288479] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:25:37.382 [2024-11-28 12:56:07.288511] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:25:37.382 request: 00:25:37.382 { 00:25:37.382 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:37.382 "host": "nqn.2016-06.io.spdk:host1", 00:25:37.382 "psk": "key0", 00:25:37.382 "method": "nvmf_subsystem_add_host", 00:25:37.382 "req_id": 1 00:25:37.382 } 00:25:37.382 Got JSON-RPC error response 00:25:37.382 response: 00:25:37.382 { 00:25:37.382 "code": -32603, 00:25:37.382 "message": "Internal error" 00:25:37.382 } 00:25:37.382 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:25:37.382 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:37.382 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:37.382 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:37.382 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3450328 00:25:37.382 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3450328 ']' 00:25:37.382 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3450328 00:25:37.382 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:37.382 12:56:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:37.382 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3450328 00:25:37.382 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:37.382 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:37.382 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3450328' 00:25:37.382 killing process with pid 3450328 00:25:37.382 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3450328 00:25:37.382 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3450328 00:25:37.382 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.d3G3e1sSNa 00:25:37.382 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:25:37.382 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:37.382 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:37.383 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:37.383 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:37.383 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3450795 00:25:37.383 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3450795 00:25:37.383 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3450795 ']' 00:25:37.383 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.383 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:37.383 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.383 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:37.383 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:37.643 [2024-11-28 12:56:07.528666] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:25:37.643 [2024-11-28 12:56:07.528724] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:37.643 [2024-11-28 12:56:07.669117] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:37.643 [2024-11-28 12:56:07.724110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.643 [2024-11-28 12:56:07.740865] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:37.643 [2024-11-28 12:56:07.740897] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:37.643 [2024-11-28 12:56:07.740902] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:37.643 [2024-11-28 12:56:07.740908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:37.643 [2024-11-28 12:56:07.740912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:37.643 [2024-11-28 12:56:07.741476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.213 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:38.213 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:38.213 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:38.213 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:38.213 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:38.474 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:38.474 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.d3G3e1sSNa 00:25:38.474 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.d3G3e1sSNa 00:25:38.474 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:38.474 [2024-11-28 12:56:08.513207] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:38.474 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:38.734 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:38.995 [2024-11-28 12:56:08.885248] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:25:38.995 [2024-11-28 12:56:08.885446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.995 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:38.995 malloc0 00:25:38.995 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:39.254 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.d3G3e1sSNa 00:25:39.514 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:39.514 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:39.514 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3451161 00:25:39.514 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:39.514 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3451161 /var/tmp/bdevperf.sock 00:25:39.514 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3451161 ']' 00:25:39.514 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:39.514 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:39.514 
12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:39.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:39.514 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:39.514 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:39.774 [2024-11-28 12:56:09.677822] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:25:39.774 [2024-11-28 12:56:09.677877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3451161 ] 00:25:39.774 [2024-11-28 12:56:09.810512] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:39.774 [2024-11-28 12:56:09.868561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.774 [2024-11-28 12:56:09.886055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:40.769 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:40.769 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:40.769 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.d3G3e1sSNa 00:25:40.769 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:40.769 [2024-11-28 12:56:10.822053] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:41.070 TLSTESTn1 00:25:41.070 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:25:41.070 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:25:41.070 "subsystems": [ 00:25:41.070 { 00:25:41.070 "subsystem": "keyring", 00:25:41.070 "config": [ 00:25:41.070 { 00:25:41.070 "method": "keyring_file_add_key", 00:25:41.070 "params": { 00:25:41.070 "name": "key0", 00:25:41.070 "path": "/tmp/tmp.d3G3e1sSNa" 00:25:41.070 } 00:25:41.070 } 00:25:41.070 ] 00:25:41.070 }, 00:25:41.070 { 00:25:41.070 "subsystem": "iobuf", 00:25:41.070 "config": [ 00:25:41.070 { 00:25:41.070 "method": "iobuf_set_options", 00:25:41.070 "params": { 00:25:41.070 "small_pool_count": 8192, 00:25:41.070 "large_pool_count": 1024, 00:25:41.070 "small_bufsize": 8192, 00:25:41.070 
"large_bufsize": 135168, 00:25:41.070 "enable_numa": false 00:25:41.070 } 00:25:41.070 } 00:25:41.070 ] 00:25:41.070 }, 00:25:41.070 { 00:25:41.070 "subsystem": "sock", 00:25:41.070 "config": [ 00:25:41.070 { 00:25:41.070 "method": "sock_set_default_impl", 00:25:41.070 "params": { 00:25:41.070 "impl_name": "posix" 00:25:41.071 } 00:25:41.071 }, 00:25:41.071 { 00:25:41.071 "method": "sock_impl_set_options", 00:25:41.071 "params": { 00:25:41.071 "impl_name": "ssl", 00:25:41.071 "recv_buf_size": 4096, 00:25:41.071 "send_buf_size": 4096, 00:25:41.071 "enable_recv_pipe": true, 00:25:41.071 "enable_quickack": false, 00:25:41.071 "enable_placement_id": 0, 00:25:41.071 "enable_zerocopy_send_server": true, 00:25:41.071 "enable_zerocopy_send_client": false, 00:25:41.071 "zerocopy_threshold": 0, 00:25:41.071 "tls_version": 0, 00:25:41.071 "enable_ktls": false 00:25:41.071 } 00:25:41.071 }, 00:25:41.071 { 00:25:41.071 "method": "sock_impl_set_options", 00:25:41.071 "params": { 00:25:41.071 "impl_name": "posix", 00:25:41.071 "recv_buf_size": 2097152, 00:25:41.071 "send_buf_size": 2097152, 00:25:41.071 "enable_recv_pipe": true, 00:25:41.071 "enable_quickack": false, 00:25:41.071 "enable_placement_id": 0, 00:25:41.071 "enable_zerocopy_send_server": true, 00:25:41.071 "enable_zerocopy_send_client": false, 00:25:41.071 "zerocopy_threshold": 0, 00:25:41.071 "tls_version": 0, 00:25:41.071 "enable_ktls": false 00:25:41.071 } 00:25:41.071 } 00:25:41.071 ] 00:25:41.071 }, 00:25:41.071 { 00:25:41.071 "subsystem": "vmd", 00:25:41.071 "config": [] 00:25:41.071 }, 00:25:41.071 { 00:25:41.071 "subsystem": "accel", 00:25:41.071 "config": [ 00:25:41.071 { 00:25:41.071 "method": "accel_set_options", 00:25:41.071 "params": { 00:25:41.071 "small_cache_size": 128, 00:25:41.071 "large_cache_size": 16, 00:25:41.071 "task_count": 2048, 00:25:41.071 "sequence_count": 2048, 00:25:41.071 "buf_count": 2048 00:25:41.071 } 00:25:41.071 } 00:25:41.071 ] 00:25:41.071 }, 00:25:41.071 { 00:25:41.071 
"subsystem": "bdev", 00:25:41.071 "config": [ 00:25:41.071 { 00:25:41.071 "method": "bdev_set_options", 00:25:41.071 "params": { 00:25:41.071 "bdev_io_pool_size": 65535, 00:25:41.071 "bdev_io_cache_size": 256, 00:25:41.071 "bdev_auto_examine": true, 00:25:41.071 "iobuf_small_cache_size": 128, 00:25:41.071 "iobuf_large_cache_size": 16 00:25:41.071 } 00:25:41.071 }, 00:25:41.071 { 00:25:41.071 "method": "bdev_raid_set_options", 00:25:41.071 "params": { 00:25:41.071 "process_window_size_kb": 1024, 00:25:41.071 "process_max_bandwidth_mb_sec": 0 00:25:41.071 } 00:25:41.071 }, 00:25:41.071 { 00:25:41.071 "method": "bdev_iscsi_set_options", 00:25:41.071 "params": { 00:25:41.071 "timeout_sec": 30 00:25:41.071 } 00:25:41.071 }, 00:25:41.071 { 00:25:41.071 "method": "bdev_nvme_set_options", 00:25:41.071 "params": { 00:25:41.071 "action_on_timeout": "none", 00:25:41.071 "timeout_us": 0, 00:25:41.071 "timeout_admin_us": 0, 00:25:41.071 "keep_alive_timeout_ms": 10000, 00:25:41.071 "arbitration_burst": 0, 00:25:41.071 "low_priority_weight": 0, 00:25:41.071 "medium_priority_weight": 0, 00:25:41.071 "high_priority_weight": 0, 00:25:41.071 "nvme_adminq_poll_period_us": 10000, 00:25:41.071 "nvme_ioq_poll_period_us": 0, 00:25:41.071 "io_queue_requests": 0, 00:25:41.071 "delay_cmd_submit": true, 00:25:41.071 "transport_retry_count": 4, 00:25:41.071 "bdev_retry_count": 3, 00:25:41.071 "transport_ack_timeout": 0, 00:25:41.071 "ctrlr_loss_timeout_sec": 0, 00:25:41.071 "reconnect_delay_sec": 0, 00:25:41.071 "fast_io_fail_timeout_sec": 0, 00:25:41.071 "disable_auto_failback": false, 00:25:41.071 "generate_uuids": false, 00:25:41.071 "transport_tos": 0, 00:25:41.071 "nvme_error_stat": false, 00:25:41.071 "rdma_srq_size": 0, 00:25:41.071 "io_path_stat": false, 00:25:41.071 "allow_accel_sequence": false, 00:25:41.071 "rdma_max_cq_size": 0, 00:25:41.071 "rdma_cm_event_timeout_ms": 0, 00:25:41.071 "dhchap_digests": [ 00:25:41.071 "sha256", 00:25:41.071 "sha384", 00:25:41.071 "sha512" 
00:25:41.071 ], 00:25:41.071 "dhchap_dhgroups": [ 00:25:41.071 "null", 00:25:41.071 "ffdhe2048", 00:25:41.071 "ffdhe3072", 00:25:41.071 "ffdhe4096", 00:25:41.071 "ffdhe6144", 00:25:41.071 "ffdhe8192" 00:25:41.071 ] 00:25:41.071 } 00:25:41.071 }, 00:25:41.071 { 00:25:41.071 "method": "bdev_nvme_set_hotplug", 00:25:41.071 "params": { 00:25:41.071 "period_us": 100000, 00:25:41.071 "enable": false 00:25:41.071 } 00:25:41.071 }, 00:25:41.071 { 00:25:41.071 "method": "bdev_malloc_create", 00:25:41.071 "params": { 00:25:41.071 "name": "malloc0", 00:25:41.071 "num_blocks": 8192, 00:25:41.071 "block_size": 4096, 00:25:41.071 "physical_block_size": 4096, 00:25:41.071 "uuid": "bfce56a5-dfb1-4493-b52b-9e541fcb34ff", 00:25:41.071 "optimal_io_boundary": 0, 00:25:41.071 "md_size": 0, 00:25:41.071 "dif_type": 0, 00:25:41.071 "dif_is_head_of_md": false, 00:25:41.071 "dif_pi_format": 0 00:25:41.071 } 00:25:41.071 }, 00:25:41.071 { 00:25:41.071 "method": "bdev_wait_for_examine" 00:25:41.071 } 00:25:41.071 ] 00:25:41.071 }, 00:25:41.071 { 00:25:41.071 "subsystem": "nbd", 00:25:41.071 "config": [] 00:25:41.071 }, 00:25:41.071 { 00:25:41.071 "subsystem": "scheduler", 00:25:41.071 "config": [ 00:25:41.071 { 00:25:41.071 "method": "framework_set_scheduler", 00:25:41.071 "params": { 00:25:41.071 "name": "static" 00:25:41.071 } 00:25:41.071 } 00:25:41.071 ] 00:25:41.071 }, 00:25:41.071 { 00:25:41.071 "subsystem": "nvmf", 00:25:41.071 "config": [ 00:25:41.071 { 00:25:41.071 "method": "nvmf_set_config", 00:25:41.071 "params": { 00:25:41.071 "discovery_filter": "match_any", 00:25:41.071 "admin_cmd_passthru": { 00:25:41.071 "identify_ctrlr": false 00:25:41.071 }, 00:25:41.071 "dhchap_digests": [ 00:25:41.071 "sha256", 00:25:41.071 "sha384", 00:25:41.071 "sha512" 00:25:41.071 ], 00:25:41.071 "dhchap_dhgroups": [ 00:25:41.071 "null", 00:25:41.071 "ffdhe2048", 00:25:41.071 "ffdhe3072", 00:25:41.071 "ffdhe4096", 00:25:41.071 "ffdhe6144", 00:25:41.071 "ffdhe8192" 00:25:41.071 ] 00:25:41.071 } 
00:25:41.071 }, 00:25:41.071 { 00:25:41.071 "method": "nvmf_set_max_subsystems", 00:25:41.071 "params": { 00:25:41.071 "max_subsystems": 1024 00:25:41.071 } 00:25:41.071 }, 00:25:41.071 { 00:25:41.071 "method": "nvmf_set_crdt", 00:25:41.071 "params": { 00:25:41.071 "crdt1": 0, 00:25:41.071 "crdt2": 0, 00:25:41.071 "crdt3": 0 00:25:41.071 } 00:25:41.071 }, 00:25:41.071 { 00:25:41.071 "method": "nvmf_create_transport", 00:25:41.071 "params": { 00:25:41.071 "trtype": "TCP", 00:25:41.071 "max_queue_depth": 128, 00:25:41.071 "max_io_qpairs_per_ctrlr": 127, 00:25:41.071 "in_capsule_data_size": 4096, 00:25:41.071 "max_io_size": 131072, 00:25:41.071 "io_unit_size": 131072, 00:25:41.071 "max_aq_depth": 128, 00:25:41.071 "num_shared_buffers": 511, 00:25:41.071 "buf_cache_size": 4294967295, 00:25:41.071 "dif_insert_or_strip": false, 00:25:41.071 "zcopy": false, 00:25:41.071 "c2h_success": false, 00:25:41.071 "sock_priority": 0, 00:25:41.071 "abort_timeout_sec": 1, 00:25:41.071 "ack_timeout": 0, 00:25:41.071 "data_wr_pool_size": 0 00:25:41.071 } 00:25:41.071 }, 00:25:41.071 { 00:25:41.071 "method": "nvmf_create_subsystem", 00:25:41.071 "params": { 00:25:41.071 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:41.071 "allow_any_host": false, 00:25:41.071 "serial_number": "SPDK00000000000001", 00:25:41.071 "model_number": "SPDK bdev Controller", 00:25:41.071 "max_namespaces": 10, 00:25:41.071 "min_cntlid": 1, 00:25:41.071 "max_cntlid": 65519, 00:25:41.071 "ana_reporting": false 00:25:41.071 } 00:25:41.071 }, 00:25:41.071 { 00:25:41.071 "method": "nvmf_subsystem_add_host", 00:25:41.071 "params": { 00:25:41.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:41.072 "host": "nqn.2016-06.io.spdk:host1", 00:25:41.072 "psk": "key0" 00:25:41.072 } 00:25:41.072 }, 00:25:41.072 { 00:25:41.072 "method": "nvmf_subsystem_add_ns", 00:25:41.072 "params": { 00:25:41.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:41.072 "namespace": { 00:25:41.072 "nsid": 1, 00:25:41.072 "bdev_name": "malloc0", 
00:25:41.072 "nguid": "BFCE56A5DFB14493B52B9E541FCB34FF", 00:25:41.072 "uuid": "bfce56a5-dfb1-4493-b52b-9e541fcb34ff", 00:25:41.072 "no_auto_visible": false 00:25:41.072 } 00:25:41.072 } 00:25:41.072 }, 00:25:41.072 { 00:25:41.072 "method": "nvmf_subsystem_add_listener", 00:25:41.072 "params": { 00:25:41.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:41.072 "listen_address": { 00:25:41.072 "trtype": "TCP", 00:25:41.072 "adrfam": "IPv4", 00:25:41.072 "traddr": "10.0.0.2", 00:25:41.072 "trsvcid": "4420" 00:25:41.072 }, 00:25:41.072 "secure_channel": true 00:25:41.072 } 00:25:41.072 } 00:25:41.072 ] 00:25:41.072 } 00:25:41.072 ] 00:25:41.072 }' 00:25:41.072 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:41.383 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:25:41.383 "subsystems": [ 00:25:41.383 { 00:25:41.383 "subsystem": "keyring", 00:25:41.383 "config": [ 00:25:41.383 { 00:25:41.383 "method": "keyring_file_add_key", 00:25:41.383 "params": { 00:25:41.383 "name": "key0", 00:25:41.383 "path": "/tmp/tmp.d3G3e1sSNa" 00:25:41.383 } 00:25:41.383 } 00:25:41.383 ] 00:25:41.383 }, 00:25:41.383 { 00:25:41.383 "subsystem": "iobuf", 00:25:41.383 "config": [ 00:25:41.383 { 00:25:41.383 "method": "iobuf_set_options", 00:25:41.383 "params": { 00:25:41.383 "small_pool_count": 8192, 00:25:41.383 "large_pool_count": 1024, 00:25:41.383 "small_bufsize": 8192, 00:25:41.383 "large_bufsize": 135168, 00:25:41.383 "enable_numa": false 00:25:41.383 } 00:25:41.383 } 00:25:41.383 ] 00:25:41.383 }, 00:25:41.383 { 00:25:41.383 "subsystem": "sock", 00:25:41.383 "config": [ 00:25:41.383 { 00:25:41.383 "method": "sock_set_default_impl", 00:25:41.383 "params": { 00:25:41.383 "impl_name": "posix" 00:25:41.383 } 00:25:41.383 }, 00:25:41.383 { 00:25:41.383 "method": "sock_impl_set_options", 00:25:41.383 "params": { 00:25:41.383 
"impl_name": "ssl", 00:25:41.383 "recv_buf_size": 4096, 00:25:41.383 "send_buf_size": 4096, 00:25:41.383 "enable_recv_pipe": true, 00:25:41.383 "enable_quickack": false, 00:25:41.383 "enable_placement_id": 0, 00:25:41.383 "enable_zerocopy_send_server": true, 00:25:41.383 "enable_zerocopy_send_client": false, 00:25:41.383 "zerocopy_threshold": 0, 00:25:41.383 "tls_version": 0, 00:25:41.383 "enable_ktls": false 00:25:41.383 } 00:25:41.383 }, 00:25:41.383 { 00:25:41.383 "method": "sock_impl_set_options", 00:25:41.383 "params": { 00:25:41.383 "impl_name": "posix", 00:25:41.383 "recv_buf_size": 2097152, 00:25:41.383 "send_buf_size": 2097152, 00:25:41.383 "enable_recv_pipe": true, 00:25:41.383 "enable_quickack": false, 00:25:41.383 "enable_placement_id": 0, 00:25:41.383 "enable_zerocopy_send_server": true, 00:25:41.383 "enable_zerocopy_send_client": false, 00:25:41.383 "zerocopy_threshold": 0, 00:25:41.383 "tls_version": 0, 00:25:41.383 "enable_ktls": false 00:25:41.383 } 00:25:41.383 } 00:25:41.383 ] 00:25:41.383 }, 00:25:41.383 { 00:25:41.383 "subsystem": "vmd", 00:25:41.383 "config": [] 00:25:41.383 }, 00:25:41.383 { 00:25:41.383 "subsystem": "accel", 00:25:41.383 "config": [ 00:25:41.383 { 00:25:41.383 "method": "accel_set_options", 00:25:41.383 "params": { 00:25:41.383 "small_cache_size": 128, 00:25:41.383 "large_cache_size": 16, 00:25:41.383 "task_count": 2048, 00:25:41.383 "sequence_count": 2048, 00:25:41.383 "buf_count": 2048 00:25:41.383 } 00:25:41.383 } 00:25:41.383 ] 00:25:41.383 }, 00:25:41.383 { 00:25:41.383 "subsystem": "bdev", 00:25:41.383 "config": [ 00:25:41.383 { 00:25:41.383 "method": "bdev_set_options", 00:25:41.383 "params": { 00:25:41.383 "bdev_io_pool_size": 65535, 00:25:41.383 "bdev_io_cache_size": 256, 00:25:41.383 "bdev_auto_examine": true, 00:25:41.383 "iobuf_small_cache_size": 128, 00:25:41.383 "iobuf_large_cache_size": 16 00:25:41.383 } 00:25:41.383 }, 00:25:41.383 { 00:25:41.383 "method": "bdev_raid_set_options", 00:25:41.383 "params": { 
00:25:41.383 "process_window_size_kb": 1024, 00:25:41.383 "process_max_bandwidth_mb_sec": 0 00:25:41.383 } 00:25:41.383 }, 00:25:41.383 { 00:25:41.383 "method": "bdev_iscsi_set_options", 00:25:41.383 "params": { 00:25:41.383 "timeout_sec": 30 00:25:41.383 } 00:25:41.383 }, 00:25:41.383 { 00:25:41.383 "method": "bdev_nvme_set_options", 00:25:41.383 "params": { 00:25:41.383 "action_on_timeout": "none", 00:25:41.383 "timeout_us": 0, 00:25:41.383 "timeout_admin_us": 0, 00:25:41.383 "keep_alive_timeout_ms": 10000, 00:25:41.383 "arbitration_burst": 0, 00:25:41.383 "low_priority_weight": 0, 00:25:41.383 "medium_priority_weight": 0, 00:25:41.383 "high_priority_weight": 0, 00:25:41.383 "nvme_adminq_poll_period_us": 10000, 00:25:41.383 "nvme_ioq_poll_period_us": 0, 00:25:41.383 "io_queue_requests": 512, 00:25:41.383 "delay_cmd_submit": true, 00:25:41.383 "transport_retry_count": 4, 00:25:41.383 "bdev_retry_count": 3, 00:25:41.383 "transport_ack_timeout": 0, 00:25:41.383 "ctrlr_loss_timeout_sec": 0, 00:25:41.383 "reconnect_delay_sec": 0, 00:25:41.383 "fast_io_fail_timeout_sec": 0, 00:25:41.383 "disable_auto_failback": false, 00:25:41.383 "generate_uuids": false, 00:25:41.383 "transport_tos": 0, 00:25:41.383 "nvme_error_stat": false, 00:25:41.383 "rdma_srq_size": 0, 00:25:41.383 "io_path_stat": false, 00:25:41.383 "allow_accel_sequence": false, 00:25:41.383 "rdma_max_cq_size": 0, 00:25:41.383 "rdma_cm_event_timeout_ms": 0, 00:25:41.383 "dhchap_digests": [ 00:25:41.383 "sha256", 00:25:41.383 "sha384", 00:25:41.383 "sha512" 00:25:41.383 ], 00:25:41.383 "dhchap_dhgroups": [ 00:25:41.383 "null", 00:25:41.383 "ffdhe2048", 00:25:41.383 "ffdhe3072", 00:25:41.383 "ffdhe4096", 00:25:41.383 "ffdhe6144", 00:25:41.383 "ffdhe8192" 00:25:41.383 ] 00:25:41.383 } 00:25:41.383 }, 00:25:41.383 { 00:25:41.383 "method": "bdev_nvme_attach_controller", 00:25:41.383 "params": { 00:25:41.383 "name": "TLSTEST", 00:25:41.383 "trtype": "TCP", 00:25:41.383 "adrfam": "IPv4", 00:25:41.383 "traddr": 
"10.0.0.2", 00:25:41.383 "trsvcid": "4420", 00:25:41.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:41.383 "prchk_reftag": false, 00:25:41.383 "prchk_guard": false, 00:25:41.383 "ctrlr_loss_timeout_sec": 0, 00:25:41.383 "reconnect_delay_sec": 0, 00:25:41.383 "fast_io_fail_timeout_sec": 0, 00:25:41.383 "psk": "key0", 00:25:41.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:41.383 "hdgst": false, 00:25:41.383 "ddgst": false, 00:25:41.383 "multipath": "multipath" 00:25:41.383 } 00:25:41.383 }, 00:25:41.383 { 00:25:41.383 "method": "bdev_nvme_set_hotplug", 00:25:41.383 "params": { 00:25:41.383 "period_us": 100000, 00:25:41.383 "enable": false 00:25:41.383 } 00:25:41.383 }, 00:25:41.383 { 00:25:41.383 "method": "bdev_wait_for_examine" 00:25:41.383 } 00:25:41.383 ] 00:25:41.383 }, 00:25:41.383 { 00:25:41.383 "subsystem": "nbd", 00:25:41.383 "config": [] 00:25:41.383 } 00:25:41.383 ] 00:25:41.383 }' 00:25:41.383 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3451161 00:25:41.383 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3451161 ']' 00:25:41.383 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3451161 00:25:41.383 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:41.383 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:41.383 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3451161 00:25:41.644 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:41.644 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:41.644 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3451161' 00:25:41.644 killing process 
with pid 3451161 00:25:41.644 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3451161 00:25:41.644 Received shutdown signal, test time was about 10.000000 seconds 00:25:41.644 00:25:41.644 Latency(us) 00:25:41.644 [2024-11-28T11:56:11.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.644 [2024-11-28T11:56:11.771Z] =================================================================================================================== 00:25:41.644 [2024-11-28T11:56:11.771Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:41.644 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3451161 00:25:41.644 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3450795 00:25:41.644 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3450795 ']' 00:25:41.644 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3450795 00:25:41.644 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:41.644 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:41.644 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3450795 00:25:41.644 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:41.644 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:41.644 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3450795' 00:25:41.644 killing process with pid 3450795 00:25:41.644 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3450795 00:25:41.644 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@978 -- # wait 3450795 00:25:41.644 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:25:41.644 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:41.644 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:41.644 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:41.644 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:25:41.644 "subsystems": [ 00:25:41.644 { 00:25:41.644 "subsystem": "keyring", 00:25:41.644 "config": [ 00:25:41.644 { 00:25:41.644 "method": "keyring_file_add_key", 00:25:41.644 "params": { 00:25:41.644 "name": "key0", 00:25:41.644 "path": "/tmp/tmp.d3G3e1sSNa" 00:25:41.644 } 00:25:41.644 } 00:25:41.644 ] 00:25:41.644 }, 00:25:41.644 { 00:25:41.644 "subsystem": "iobuf", 00:25:41.644 "config": [ 00:25:41.644 { 00:25:41.644 "method": "iobuf_set_options", 00:25:41.644 "params": { 00:25:41.644 "small_pool_count": 8192, 00:25:41.644 "large_pool_count": 1024, 00:25:41.644 "small_bufsize": 8192, 00:25:41.644 "large_bufsize": 135168, 00:25:41.644 "enable_numa": false 00:25:41.644 } 00:25:41.644 } 00:25:41.644 ] 00:25:41.644 }, 00:25:41.645 { 00:25:41.645 "subsystem": "sock", 00:25:41.645 "config": [ 00:25:41.645 { 00:25:41.645 "method": "sock_set_default_impl", 00:25:41.645 "params": { 00:25:41.645 "impl_name": "posix" 00:25:41.645 } 00:25:41.645 }, 00:25:41.645 { 00:25:41.645 "method": "sock_impl_set_options", 00:25:41.645 "params": { 00:25:41.645 "impl_name": "ssl", 00:25:41.645 "recv_buf_size": 4096, 00:25:41.645 "send_buf_size": 4096, 00:25:41.645 "enable_recv_pipe": true, 00:25:41.645 "enable_quickack": false, 00:25:41.645 "enable_placement_id": 0, 00:25:41.645 "enable_zerocopy_send_server": true, 00:25:41.645 "enable_zerocopy_send_client": false, 00:25:41.645 "zerocopy_threshold": 0, 
00:25:41.645 "tls_version": 0, 00:25:41.645 "enable_ktls": false 00:25:41.645 } 00:25:41.645 }, 00:25:41.645 { 00:25:41.645 "method": "sock_impl_set_options", 00:25:41.645 "params": { 00:25:41.645 "impl_name": "posix", 00:25:41.645 "recv_buf_size": 2097152, 00:25:41.645 "send_buf_size": 2097152, 00:25:41.645 "enable_recv_pipe": true, 00:25:41.645 "enable_quickack": false, 00:25:41.645 "enable_placement_id": 0, 00:25:41.645 "enable_zerocopy_send_server": true, 00:25:41.645 "enable_zerocopy_send_client": false, 00:25:41.645 "zerocopy_threshold": 0, 00:25:41.645 "tls_version": 0, 00:25:41.645 "enable_ktls": false 00:25:41.645 } 00:25:41.645 } 00:25:41.645 ] 00:25:41.645 }, 00:25:41.645 { 00:25:41.645 "subsystem": "vmd", 00:25:41.645 "config": [] 00:25:41.645 }, 00:25:41.645 { 00:25:41.645 "subsystem": "accel", 00:25:41.645 "config": [ 00:25:41.645 { 00:25:41.645 "method": "accel_set_options", 00:25:41.645 "params": { 00:25:41.645 "small_cache_size": 128, 00:25:41.645 "large_cache_size": 16, 00:25:41.645 "task_count": 2048, 00:25:41.645 "sequence_count": 2048, 00:25:41.645 "buf_count": 2048 00:25:41.645 } 00:25:41.645 } 00:25:41.645 ] 00:25:41.645 }, 00:25:41.645 { 00:25:41.645 "subsystem": "bdev", 00:25:41.645 "config": [ 00:25:41.645 { 00:25:41.645 "method": "bdev_set_options", 00:25:41.645 "params": { 00:25:41.645 "bdev_io_pool_size": 65535, 00:25:41.645 "bdev_io_cache_size": 256, 00:25:41.645 "bdev_auto_examine": true, 00:25:41.645 "iobuf_small_cache_size": 128, 00:25:41.645 "iobuf_large_cache_size": 16 00:25:41.645 } 00:25:41.645 }, 00:25:41.645 { 00:25:41.645 "method": "bdev_raid_set_options", 00:25:41.645 "params": { 00:25:41.645 "process_window_size_kb": 1024, 00:25:41.645 "process_max_bandwidth_mb_sec": 0 00:25:41.645 } 00:25:41.645 }, 00:25:41.645 { 00:25:41.645 "method": "bdev_iscsi_set_options", 00:25:41.645 "params": { 00:25:41.645 "timeout_sec": 30 00:25:41.645 } 00:25:41.645 }, 00:25:41.645 { 00:25:41.645 "method": "bdev_nvme_set_options", 00:25:41.645 
"params": { 00:25:41.645 "action_on_timeout": "none", 00:25:41.645 "timeout_us": 0, 00:25:41.645 "timeout_admin_us": 0, 00:25:41.645 "keep_alive_timeout_ms": 10000, 00:25:41.645 "arbitration_burst": 0, 00:25:41.645 "low_priority_weight": 0, 00:25:41.645 "medium_priority_weight": 0, 00:25:41.645 "high_priority_weight": 0, 00:25:41.645 "nvme_adminq_poll_period_us": 10000, 00:25:41.645 "nvme_ioq_poll_period_us": 0, 00:25:41.645 "io_queue_requests": 0, 00:25:41.645 "delay_cmd_submit": true, 00:25:41.645 "transport_retry_count": 4, 00:25:41.645 "bdev_retry_count": 3, 00:25:41.645 "transport_ack_timeout": 0, 00:25:41.645 "ctrlr_loss_timeout_sec": 0, 00:25:41.645 "reconnect_delay_sec": 0, 00:25:41.645 "fast_io_fail_timeout_sec": 0, 00:25:41.645 "disable_auto_failback": false, 00:25:41.645 "generate_uuids": false, 00:25:41.645 "transport_tos": 0, 00:25:41.645 "nvme_error_stat": false, 00:25:41.645 "rdma_srq_size": 0, 00:25:41.645 "io_path_stat": false, 00:25:41.645 "allow_accel_sequence": false, 00:25:41.645 "rdma_max_cq_size": 0, 00:25:41.645 "rdma_cm_event_timeout_ms": 0, 00:25:41.645 "dhchap_digests": [ 00:25:41.645 "sha256", 00:25:41.645 "sha384", 00:25:41.645 "sha512" 00:25:41.645 ], 00:25:41.645 "dhchap_dhgroups": [ 00:25:41.645 "null", 00:25:41.645 "ffdhe2048", 00:25:41.645 "ffdhe3072", 00:25:41.645 "ffdhe4096", 00:25:41.645 "ffdhe6144", 00:25:41.645 "ffdhe8192" 00:25:41.645 ] 00:25:41.645 } 00:25:41.645 }, 00:25:41.645 { 00:25:41.645 "method": "bdev_nvme_set_hotplug", 00:25:41.645 "params": { 00:25:41.645 "period_us": 100000, 00:25:41.645 "enable": false 00:25:41.645 } 00:25:41.645 }, 00:25:41.645 { 00:25:41.645 "method": "bdev_malloc_create", 00:25:41.645 "params": { 00:25:41.645 "name": "malloc0", 00:25:41.645 "num_blocks": 8192, 00:25:41.645 "block_size": 4096, 00:25:41.645 "physical_block_size": 4096, 00:25:41.645 "uuid": "bfce56a5-dfb1-4493-b52b-9e541fcb34ff", 00:25:41.645 "optimal_io_boundary": 0, 00:25:41.645 "md_size": 0, 00:25:41.645 "dif_type": 0, 
00:25:41.645 "dif_is_head_of_md": false, 00:25:41.645 "dif_pi_format": 0 00:25:41.645 } 00:25:41.645 }, 00:25:41.645 { 00:25:41.645 "method": "bdev_wait_for_examine" 00:25:41.645 } 00:25:41.645 ] 00:25:41.645 }, 00:25:41.645 { 00:25:41.645 "subsystem": "nbd", 00:25:41.645 "config": [] 00:25:41.645 }, 00:25:41.645 { 00:25:41.645 "subsystem": "scheduler", 00:25:41.645 "config": [ 00:25:41.645 { 00:25:41.645 "method": "framework_set_scheduler", 00:25:41.645 "params": { 00:25:41.645 "name": "static" 00:25:41.645 } 00:25:41.645 } 00:25:41.645 ] 00:25:41.645 }, 00:25:41.645 { 00:25:41.645 "subsystem": "nvmf", 00:25:41.645 "config": [ 00:25:41.645 { 00:25:41.645 "method": "nvmf_set_config", 00:25:41.645 "params": { 00:25:41.645 "discovery_filter": "match_any", 00:25:41.645 "admin_cmd_passthru": { 00:25:41.645 "identify_ctrlr": false 00:25:41.645 }, 00:25:41.645 "dhchap_digests": [ 00:25:41.645 "sha256", 00:25:41.645 "sha384", 00:25:41.645 "sha512" 00:25:41.645 ], 00:25:41.645 "dhchap_dhgroups": [ 00:25:41.645 "null", 00:25:41.645 "ffdhe2048", 00:25:41.645 "ffdhe3072", 00:25:41.645 "ffdhe4096", 00:25:41.645 "ffdhe6144", 00:25:41.645 "ffdhe8192" 00:25:41.645 ] 00:25:41.645 } 00:25:41.645 }, 00:25:41.645 { 00:25:41.645 "method": "nvmf_set_max_subsystems", 00:25:41.645 "params": { 00:25:41.645 "max_subsystems": 1024 00:25:41.645 } 00:25:41.645 }, 00:25:41.645 { 00:25:41.645 "method": "nvmf_set_crdt", 00:25:41.645 "params": { 00:25:41.645 "crdt1": 0, 00:25:41.645 "crdt2": 0, 00:25:41.645 "crdt3": 0 00:25:41.645 } 00:25:41.645 }, 00:25:41.645 { 00:25:41.645 "method": "nvmf_create_transport", 00:25:41.645 "params": { 00:25:41.645 "trtype": "TCP", 00:25:41.645 "max_queue_depth": 128, 00:25:41.645 "max_io_qpairs_per_ctrlr": 127, 00:25:41.645 "in_capsule_data_size": 4096, 00:25:41.645 "max_io_size": 131072, 00:25:41.645 "io_unit_size": 131072, 00:25:41.645 "max_aq_depth": 128, 00:25:41.645 "num_shared_buffers": 511, 00:25:41.645 "buf_cache_size": 4294967295, 00:25:41.645 
"dif_insert_or_strip": false, 00:25:41.645 "zcopy": false, 00:25:41.645 "c2h_success": false, 00:25:41.645 "sock_priority": 0, 00:25:41.645 "abort_timeout_sec": 1, 00:25:41.645 "ack_timeout": 0, 00:25:41.645 "data_wr_pool_size": 0 00:25:41.645 } 00:25:41.645 }, 00:25:41.645 { 00:25:41.645 "method": "nvmf_create_subsystem", 00:25:41.645 "params": { 00:25:41.645 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:41.645 "allow_any_host": false, 00:25:41.645 "serial_number": "SPDK00000000000001", 00:25:41.645 "model_number": "SPDK bdev Controller", 00:25:41.645 "max_namespaces": 10, 00:25:41.645 "min_cntlid": 1, 00:25:41.645 "max_cntlid": 65519, 00:25:41.646 "ana_reporting": false 00:25:41.646 } 00:25:41.646 }, 00:25:41.646 { 00:25:41.646 "method": "nvmf_subsystem_add_host", 00:25:41.646 "params": { 00:25:41.646 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:41.646 "host": "nqn.2016-06.io.spdk:host1", 00:25:41.646 "psk": "key0" 00:25:41.646 } 00:25:41.646 }, 00:25:41.646 { 00:25:41.646 "method": "nvmf_subsystem_add_ns", 00:25:41.646 "params": { 00:25:41.646 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:41.646 "namespace": { 00:25:41.646 "nsid": 1, 00:25:41.646 "bdev_name": "malloc0", 00:25:41.646 "nguid": "BFCE56A5DFB14493B52B9E541FCB34FF", 00:25:41.646 "uuid": "bfce56a5-dfb1-4493-b52b-9e541fcb34ff", 00:25:41.646 "no_auto_visible": false 00:25:41.646 } 00:25:41.646 } 00:25:41.646 }, 00:25:41.646 { 00:25:41.646 "method": "nvmf_subsystem_add_listener", 00:25:41.646 "params": { 00:25:41.646 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:41.646 "listen_address": { 00:25:41.646 "trtype": "TCP", 00:25:41.646 "adrfam": "IPv4", 00:25:41.646 "traddr": "10.0.0.2", 00:25:41.646 "trsvcid": "4420" 00:25:41.646 }, 00:25:41.646 "secure_channel": true 00:25:41.646 } 00:25:41.646 } 00:25:41.646 ] 00:25:41.646 } 00:25:41.646 ] 00:25:41.646 }' 00:25:41.907 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3451599 00:25:41.907 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@510 -- # waitforlisten 3451599 00:25:41.908 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:25:41.908 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3451599 ']' 00:25:41.908 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.908 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:41.908 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.908 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:41.908 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:41.908 [2024-11-28 12:56:11.826947] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:25:41.908 [2024-11-28 12:56:11.827002] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.908 [2024-11-28 12:56:11.967144] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:41.908 [2024-11-28 12:56:12.020410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.168 [2024-11-28 12:56:12.041239] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:42.168 [2024-11-28 12:56:12.041274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:42.169 [2024-11-28 12:56:12.041280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.169 [2024-11-28 12:56:12.041285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.169 [2024-11-28 12:56:12.041290] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:42.169 [2024-11-28 12:56:12.041868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.169 [2024-11-28 12:56:12.230672] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.169 [2024-11-28 12:56:12.262618] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:42.169 [2024-11-28 12:56:12.262819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.740 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:42.740 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:42.740 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:42.740 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:42.740 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:42.740 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.740 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3451868 00:25:42.740 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3451868 /var/tmp/bdevperf.sock 00:25:42.740 12:56:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3451868 ']' 00:25:42.740 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:42.740 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:42.740 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:25:42.740 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:42.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:42.740 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:42.740 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:42.740 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:25:42.740 "subsystems": [ 00:25:42.740 { 00:25:42.740 "subsystem": "keyring", 00:25:42.740 "config": [ 00:25:42.740 { 00:25:42.740 "method": "keyring_file_add_key", 00:25:42.740 "params": { 00:25:42.740 "name": "key0", 00:25:42.740 "path": "/tmp/tmp.d3G3e1sSNa" 00:25:42.740 } 00:25:42.740 } 00:25:42.740 ] 00:25:42.740 }, 00:25:42.740 { 00:25:42.740 "subsystem": "iobuf", 00:25:42.740 "config": [ 00:25:42.740 { 00:25:42.740 "method": "iobuf_set_options", 00:25:42.740 "params": { 00:25:42.740 "small_pool_count": 8192, 00:25:42.740 "large_pool_count": 1024, 00:25:42.740 "small_bufsize": 8192, 00:25:42.740 "large_bufsize": 135168, 00:25:42.740 "enable_numa": false 00:25:42.740 } 00:25:42.740 } 00:25:42.740 ] 00:25:42.740 }, 00:25:42.740 { 00:25:42.740 "subsystem": "sock", 00:25:42.740 "config": [ 
00:25:42.740 { 00:25:42.740 "method": "sock_set_default_impl", 00:25:42.740 "params": { 00:25:42.740 "impl_name": "posix" 00:25:42.740 } 00:25:42.740 }, 00:25:42.740 { 00:25:42.740 "method": "sock_impl_set_options", 00:25:42.740 "params": { 00:25:42.740 "impl_name": "ssl", 00:25:42.740 "recv_buf_size": 4096, 00:25:42.740 "send_buf_size": 4096, 00:25:42.740 "enable_recv_pipe": true, 00:25:42.740 "enable_quickack": false, 00:25:42.740 "enable_placement_id": 0, 00:25:42.740 "enable_zerocopy_send_server": true, 00:25:42.740 "enable_zerocopy_send_client": false, 00:25:42.740 "zerocopy_threshold": 0, 00:25:42.740 "tls_version": 0, 00:25:42.740 "enable_ktls": false 00:25:42.740 } 00:25:42.740 }, 00:25:42.740 { 00:25:42.740 "method": "sock_impl_set_options", 00:25:42.740 "params": { 00:25:42.740 "impl_name": "posix", 00:25:42.740 "recv_buf_size": 2097152, 00:25:42.740 "send_buf_size": 2097152, 00:25:42.740 "enable_recv_pipe": true, 00:25:42.740 "enable_quickack": false, 00:25:42.740 "enable_placement_id": 0, 00:25:42.740 "enable_zerocopy_send_server": true, 00:25:42.740 "enable_zerocopy_send_client": false, 00:25:42.740 "zerocopy_threshold": 0, 00:25:42.740 "tls_version": 0, 00:25:42.740 "enable_ktls": false 00:25:42.740 } 00:25:42.740 } 00:25:42.740 ] 00:25:42.740 }, 00:25:42.740 { 00:25:42.740 "subsystem": "vmd", 00:25:42.740 "config": [] 00:25:42.740 }, 00:25:42.740 { 00:25:42.740 "subsystem": "accel", 00:25:42.740 "config": [ 00:25:42.740 { 00:25:42.740 "method": "accel_set_options", 00:25:42.740 "params": { 00:25:42.740 "small_cache_size": 128, 00:25:42.740 "large_cache_size": 16, 00:25:42.740 "task_count": 2048, 00:25:42.740 "sequence_count": 2048, 00:25:42.740 "buf_count": 2048 00:25:42.740 } 00:25:42.740 } 00:25:42.740 ] 00:25:42.740 }, 00:25:42.740 { 00:25:42.740 "subsystem": "bdev", 00:25:42.740 "config": [ 00:25:42.740 { 00:25:42.740 "method": "bdev_set_options", 00:25:42.740 "params": { 00:25:42.740 "bdev_io_pool_size": 65535, 00:25:42.740 "bdev_io_cache_size": 
256, 00:25:42.740 "bdev_auto_examine": true, 00:25:42.740 "iobuf_small_cache_size": 128, 00:25:42.740 "iobuf_large_cache_size": 16 00:25:42.740 } 00:25:42.740 }, 00:25:42.740 { 00:25:42.740 "method": "bdev_raid_set_options", 00:25:42.740 "params": { 00:25:42.740 "process_window_size_kb": 1024, 00:25:42.740 "process_max_bandwidth_mb_sec": 0 00:25:42.740 } 00:25:42.740 }, 00:25:42.740 { 00:25:42.740 "method": "bdev_iscsi_set_options", 00:25:42.740 "params": { 00:25:42.740 "timeout_sec": 30 00:25:42.740 } 00:25:42.740 }, 00:25:42.740 { 00:25:42.740 "method": "bdev_nvme_set_options", 00:25:42.740 "params": { 00:25:42.740 "action_on_timeout": "none", 00:25:42.740 "timeout_us": 0, 00:25:42.740 "timeout_admin_us": 0, 00:25:42.740 "keep_alive_timeout_ms": 10000, 00:25:42.740 "arbitration_burst": 0, 00:25:42.740 "low_priority_weight": 0, 00:25:42.740 "medium_priority_weight": 0, 00:25:42.740 "high_priority_weight": 0, 00:25:42.740 "nvme_adminq_poll_period_us": 10000, 00:25:42.740 "nvme_ioq_poll_period_us": 0, 00:25:42.741 "io_queue_requests": 512, 00:25:42.741 "delay_cmd_submit": true, 00:25:42.741 "transport_retry_count": 4, 00:25:42.741 "bdev_retry_count": 3, 00:25:42.741 "transport_ack_timeout": 0, 00:25:42.741 "ctrlr_loss_timeout_sec": 0, 00:25:42.741 "reconnect_delay_sec": 0, 00:25:42.741 "fast_io_fail_timeout_sec": 0, 00:25:42.741 "disable_auto_failback": false, 00:25:42.741 "generate_uuids": false, 00:25:42.741 "transport_tos": 0, 00:25:42.741 "nvme_error_stat": false, 00:25:42.741 "rdma_srq_size": 0, 00:25:42.741 "io_path_stat": false, 00:25:42.741 "allow_accel_sequence": false, 00:25:42.741 "rdma_max_cq_size": 0, 00:25:42.741 "rdma_cm_event_timeout_ms": 0, 00:25:42.741 "dhchap_digests": [ 00:25:42.741 "sha256", 00:25:42.741 "sha384", 00:25:42.741 "sha512" 00:25:42.741 ], 00:25:42.741 "dhchap_dhgroups": [ 00:25:42.741 "null", 00:25:42.741 "ffdhe2048", 00:25:42.741 "ffdhe3072", 00:25:42.741 "ffdhe4096", 00:25:42.741 "ffdhe6144", 00:25:42.741 "ffdhe8192" 00:25:42.741 
] 00:25:42.741 } 00:25:42.741 }, 00:25:42.741 { 00:25:42.741 "method": "bdev_nvme_attach_controller", 00:25:42.741 "params": { 00:25:42.741 "name": "TLSTEST", 00:25:42.741 "trtype": "TCP", 00:25:42.741 "adrfam": "IPv4", 00:25:42.741 "traddr": "10.0.0.2", 00:25:42.741 "trsvcid": "4420", 00:25:42.741 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:42.741 "prchk_reftag": false, 00:25:42.741 "prchk_guard": false, 00:25:42.741 "ctrlr_loss_timeout_sec": 0, 00:25:42.741 "reconnect_delay_sec": 0, 00:25:42.741 "fast_io_fail_timeout_sec": 0, 00:25:42.741 "psk": "key0", 00:25:42.741 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:42.741 "hdgst": false, 00:25:42.741 "ddgst": false, 00:25:42.741 "multipath": "multipath" 00:25:42.741 } 00:25:42.741 }, 00:25:42.741 { 00:25:42.741 "method": "bdev_nvme_set_hotplug", 00:25:42.741 "params": { 00:25:42.741 "period_us": 100000, 00:25:42.741 "enable": false 00:25:42.741 } 00:25:42.741 }, 00:25:42.741 { 00:25:42.741 "method": "bdev_wait_for_examine" 00:25:42.741 } 00:25:42.741 ] 00:25:42.741 }, 00:25:42.741 { 00:25:42.741 "subsystem": "nbd", 00:25:42.741 "config": [] 00:25:42.741 } 00:25:42.741 ] 00:25:42.741 }' 00:25:42.741 [2024-11-28 12:56:12.727601] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:25:42.741 [2024-11-28 12:56:12.727653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3451868 ] 00:25:42.741 [2024-11-28 12:56:12.860286] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:43.001 [2024-11-28 12:56:12.920373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.001 [2024-11-28 12:56:12.938114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:43.001 [2024-11-28 12:56:13.071945] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:43.573 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:43.573 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:43.573 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:43.573 Running I/O for 10 seconds... 00:25:45.895 4903.00 IOPS, 19.15 MiB/s [2024-11-28T11:56:16.964Z] 5278.50 IOPS, 20.62 MiB/s [2024-11-28T11:56:17.906Z] 5525.00 IOPS, 21.58 MiB/s [2024-11-28T11:56:18.847Z] 5472.50 IOPS, 21.38 MiB/s [2024-11-28T11:56:19.787Z] 5600.20 IOPS, 21.88 MiB/s [2024-11-28T11:56:20.729Z] 5558.00 IOPS, 21.71 MiB/s [2024-11-28T11:56:21.670Z] 5483.00 IOPS, 21.42 MiB/s [2024-11-28T11:56:23.053Z] 5485.50 IOPS, 21.43 MiB/s [2024-11-28T11:56:23.624Z] 5512.56 IOPS, 21.53 MiB/s [2024-11-28T11:56:23.885Z] 5547.80 IOPS, 21.67 MiB/s 00:25:53.758 Latency(us) 00:25:53.758 [2024-11-28T11:56:23.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.758 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:53.758 Verification LBA range: start 0x0 length 0x2000 00:25:53.758 TLSTESTn1 : 10.02 5551.39 21.69 0.00 0.00 23017.63 6158.37 31092.92 00:25:53.758 [2024-11-28T11:56:23.885Z] =================================================================================================================== 00:25:53.758 [2024-11-28T11:56:23.885Z] Total : 5551.39 21.69 0.00 0.00 23017.63 6158.37 31092.92 00:25:53.758 { 00:25:53.758 
"results": [ 00:25:53.758 { 00:25:53.758 "job": "TLSTESTn1", 00:25:53.758 "core_mask": "0x4", 00:25:53.758 "workload": "verify", 00:25:53.758 "status": "finished", 00:25:53.758 "verify_range": { 00:25:53.758 "start": 0, 00:25:53.758 "length": 8192 00:25:53.758 }, 00:25:53.758 "queue_depth": 128, 00:25:53.758 "io_size": 4096, 00:25:53.758 "runtime": 10.016235, 00:25:53.758 "iops": 5551.387322681627, 00:25:53.758 "mibps": 21.685106729225105, 00:25:53.758 "io_failed": 0, 00:25:53.758 "io_timeout": 0, 00:25:53.758 "avg_latency_us": 23017.627861648645, 00:25:53.758 "min_latency_us": 6158.369528900768, 00:25:53.759 "max_latency_us": 31092.92348813899 00:25:53.759 } 00:25:53.759 ], 00:25:53.759 "core_count": 1 00:25:53.759 } 00:25:53.759 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:53.759 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3451868 00:25:53.759 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3451868 ']' 00:25:53.759 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3451868 00:25:53.759 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:53.759 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:53.759 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3451868 00:25:53.759 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:53.759 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:53.759 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3451868' 00:25:53.759 killing process with pid 3451868 00:25:53.759 12:56:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3451868 00:25:53.759 Received shutdown signal, test time was about 10.000000 seconds 00:25:53.759 00:25:53.759 Latency(us) 00:25:53.759 [2024-11-28T11:56:23.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.759 [2024-11-28T11:56:23.886Z] =================================================================================================================== 00:25:53.759 [2024-11-28T11:56:23.886Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:53.759 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3451868 00:25:53.759 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3451599 00:25:53.759 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3451599 ']' 00:25:53.759 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3451599 00:25:53.759 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:53.759 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:53.759 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3451599 00:25:54.020 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:54.020 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:54.020 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3451599' 00:25:54.020 killing process with pid 3451599 00:25:54.020 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3451599 00:25:54.020 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3451599 00:25:54.020 
12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:25:54.020 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:54.020 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:54.020 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:54.020 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3453977 00:25:54.020 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3453977 00:25:54.020 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:54.020 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3453977 ']' 00:25:54.020 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.020 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:54.020 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.020 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:54.020 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:54.020 [2024-11-28 12:56:24.057782] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:25:54.020 [2024-11-28 12:56:24.057839] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.281 [2024-11-28 12:56:24.198092] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:54.281 [2024-11-28 12:56:24.258032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.281 [2024-11-28 12:56:24.283568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.281 [2024-11-28 12:56:24.283611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.281 [2024-11-28 12:56:24.283619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.281 [2024-11-28 12:56:24.283626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.281 [2024-11-28 12:56:24.283633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:54.281 [2024-11-28 12:56:24.284395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.852 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:54.852 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:54.852 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:54.852 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:54.852 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:54.852 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.852 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.d3G3e1sSNa 00:25:54.852 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.d3G3e1sSNa 00:25:54.852 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:55.112 [2024-11-28 12:56:25.090303] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.112 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:55.373 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:55.373 [2024-11-28 12:56:25.474369] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:55.373 [2024-11-28 12:56:25.474686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:25:55.634 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:55.634 malloc0 00:25:55.634 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:55.896 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.d3G3e1sSNa 00:25:56.158 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:56.419 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3454558 00:25:56.419 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:56.419 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:56.419 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3454558 /var/tmp/bdevperf.sock 00:25:56.419 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3454558 ']' 00:25:56.419 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:56.419 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:56.419 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:25:56.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:56.419 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:56.419 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:56.419 [2024-11-28 12:56:26.358734] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:25:56.419 [2024-11-28 12:56:26.358811] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3454558 ] 00:25:56.419 [2024-11-28 12:56:26.495986] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:56.680 [2024-11-28 12:56:26.551798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.680 [2024-11-28 12:56:26.570304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.251 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:57.251 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:57.251 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.d3G3e1sSNa 00:25:57.251 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:57.512 [2024-11-28 12:56:27.481954] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:57.512 nvme0n1 00:25:57.512 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:57.773 Running I/O for 1 seconds... 00:25:58.715 4018.00 IOPS, 15.70 MiB/s 00:25:58.715 Latency(us) 00:25:58.715 [2024-11-28T11:56:28.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.715 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:58.715 Verification LBA range: start 0x0 length 0x2000 00:25:58.715 nvme0n1 : 1.05 3957.01 15.46 0.00 0.00 31751.55 6076.26 50361.78 00:25:58.715 [2024-11-28T11:56:28.842Z] =================================================================================================================== 00:25:58.715 [2024-11-28T11:56:28.842Z] Total : 3957.01 15.46 0.00 0.00 31751.55 6076.26 50361.78 00:25:58.715 { 00:25:58.715 "results": [ 00:25:58.715 { 00:25:58.715 "job": "nvme0n1", 00:25:58.716 "core_mask": "0x2", 00:25:58.716 "workload": "verify", 00:25:58.716 "status": "finished", 00:25:58.716 "verify_range": { 00:25:58.716 "start": 0, 00:25:58.716 "length": 8192 00:25:58.716 }, 00:25:58.716 "queue_depth": 128, 00:25:58.716 "io_size": 4096, 00:25:58.716 "runtime": 1.047762, 00:25:58.716 "iops": 3957.00550315816, 00:25:58.716 "mibps": 15.457052746711563, 00:25:58.716 "io_failed": 0, 00:25:58.716 "io_timeout": 0, 00:25:58.716 "avg_latency_us": 31751.54587589727, 00:25:58.716 "min_latency_us": 6076.257935182091, 00:25:58.716 "max_latency_us": 50361.77748078851 00:25:58.716 } 00:25:58.716 ], 00:25:58.716 "core_count": 1 00:25:58.716 } 00:25:58.716 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3454558 00:25:58.716 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3454558 ']' 00:25:58.716 
12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3454558 00:25:58.716 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:58.716 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:58.716 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3454558 00:25:58.716 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:58.716 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:58.716 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3454558' 00:25:58.716 killing process with pid 3454558 00:25:58.716 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3454558 00:25:58.716 Received shutdown signal, test time was about 1.000000 seconds 00:25:58.716 00:25:58.716 Latency(us) 00:25:58.716 [2024-11-28T11:56:28.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.716 [2024-11-28T11:56:28.843Z] =================================================================================================================== 00:25:58.716 [2024-11-28T11:56:28.843Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:58.716 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3454558 00:25:58.976 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3453977 00:25:58.976 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3453977 ']' 00:25:58.976 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3453977 00:25:58.976 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:58.976 
12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:58.977 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3453977 00:25:58.977 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:58.977 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:58.977 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3453977' 00:25:58.977 killing process with pid 3453977 00:25:58.977 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3453977 00:25:58.977 12:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3453977 00:25:58.977 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:25:58.977 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:58.977 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:58.977 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:59.238 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3454947 00:25:59.238 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3454947 00:25:59.238 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:59.238 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3454947 ']' 00:25:59.238 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.238 12:56:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:59.238 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.238 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:59.238 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:59.238 [2024-11-28 12:56:29.161275] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:25:59.238 [2024-11-28 12:56:29.161340] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.238 [2024-11-28 12:56:29.303038] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:59.238 [2024-11-28 12:56:29.362033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.499 [2024-11-28 12:56:29.387819] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:59.499 [2024-11-28 12:56:29.387862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.499 [2024-11-28 12:56:29.387870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:59.499 [2024-11-28 12:56:29.387877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:59.499 [2024-11-28 12:56:29.387884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:59.499 [2024-11-28 12:56:29.388592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.072 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:00.072 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:26:00.072 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:00.072 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:00.072 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:00.072 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.072 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:26:00.072 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.072 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:00.072 [2024-11-28 12:56:30.015093] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.072 malloc0 00:26:00.072 [2024-11-28 12:56:30.045854] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:00.072 [2024-11-28 12:56:30.046206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.072 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.072 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3455284 00:26:00.072 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3455284 /var/tmp/bdevperf.sock 00:26:00.072 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:26:00.072 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3455284 ']' 00:26:00.072 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:00.072 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:00.072 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:00.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:00.072 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:00.072 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:00.072 [2024-11-28 12:56:30.129617] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:26:00.072 [2024-11-28 12:56:30.129695] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455284 ] 00:26:00.332 [2024-11-28 12:56:30.266900] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:00.332 [2024-11-28 12:56:30.319866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.332 [2024-11-28 12:56:30.338573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.903 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:00.903 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:26:00.903 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.d3G3e1sSNa 00:26:01.163 12:56:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:01.163 [2024-11-28 12:56:31.250508] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:01.423 nvme0n1 00:26:01.423 12:56:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:01.423 Running I/O for 1 seconds... 
00:26:02.364 5506.00 IOPS, 21.51 MiB/s 00:26:02.364 Latency(us) 00:26:02.364 [2024-11-28T11:56:32.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.364 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:02.364 Verification LBA range: start 0x0 length 0x2000 00:26:02.364 nvme0n1 : 1.01 5555.56 21.70 0.00 0.00 22898.08 4543.51 26604.16 00:26:02.364 [2024-11-28T11:56:32.491Z] =================================================================================================================== 00:26:02.364 [2024-11-28T11:56:32.491Z] Total : 5555.56 21.70 0.00 0.00 22898.08 4543.51 26604.16 00:26:02.364 { 00:26:02.364 "results": [ 00:26:02.364 { 00:26:02.364 "job": "nvme0n1", 00:26:02.364 "core_mask": "0x2", 00:26:02.364 "workload": "verify", 00:26:02.364 "status": "finished", 00:26:02.364 "verify_range": { 00:26:02.364 "start": 0, 00:26:02.364 "length": 8192 00:26:02.364 }, 00:26:02.364 "queue_depth": 128, 00:26:02.364 "io_size": 4096, 00:26:02.364 "runtime": 1.014119, 00:26:02.364 "iops": 5555.561033764282, 00:26:02.364 "mibps": 21.701410288141727, 00:26:02.364 "io_failed": 0, 00:26:02.364 "io_timeout": 0, 00:26:02.364 "avg_latency_us": 22898.08247169084, 00:26:02.364 "min_latency_us": 4543.508185766789, 00:26:02.364 "max_latency_us": 26604.15636485132 00:26:02.364 } 00:26:02.364 ], 00:26:02.364 "core_count": 1 00:26:02.364 } 00:26:02.364 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:26:02.364 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.364 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:02.625 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.625 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:26:02.625 "subsystems": [ 00:26:02.625 { 00:26:02.625 "subsystem": 
"keyring", 00:26:02.625 "config": [ 00:26:02.625 { 00:26:02.625 "method": "keyring_file_add_key", 00:26:02.625 "params": { 00:26:02.625 "name": "key0", 00:26:02.625 "path": "/tmp/tmp.d3G3e1sSNa" 00:26:02.625 } 00:26:02.625 } 00:26:02.625 ] 00:26:02.625 }, 00:26:02.625 { 00:26:02.625 "subsystem": "iobuf", 00:26:02.625 "config": [ 00:26:02.625 { 00:26:02.625 "method": "iobuf_set_options", 00:26:02.625 "params": { 00:26:02.625 "small_pool_count": 8192, 00:26:02.625 "large_pool_count": 1024, 00:26:02.625 "small_bufsize": 8192, 00:26:02.625 "large_bufsize": 135168, 00:26:02.625 "enable_numa": false 00:26:02.625 } 00:26:02.625 } 00:26:02.625 ] 00:26:02.625 }, 00:26:02.625 { 00:26:02.625 "subsystem": "sock", 00:26:02.625 "config": [ 00:26:02.625 { 00:26:02.625 "method": "sock_set_default_impl", 00:26:02.625 "params": { 00:26:02.625 "impl_name": "posix" 00:26:02.625 } 00:26:02.625 }, 00:26:02.625 { 00:26:02.625 "method": "sock_impl_set_options", 00:26:02.625 "params": { 00:26:02.625 "impl_name": "ssl", 00:26:02.625 "recv_buf_size": 4096, 00:26:02.625 "send_buf_size": 4096, 00:26:02.625 "enable_recv_pipe": true, 00:26:02.625 "enable_quickack": false, 00:26:02.625 "enable_placement_id": 0, 00:26:02.625 "enable_zerocopy_send_server": true, 00:26:02.625 "enable_zerocopy_send_client": false, 00:26:02.625 "zerocopy_threshold": 0, 00:26:02.625 "tls_version": 0, 00:26:02.625 "enable_ktls": false 00:26:02.625 } 00:26:02.625 }, 00:26:02.625 { 00:26:02.625 "method": "sock_impl_set_options", 00:26:02.625 "params": { 00:26:02.625 "impl_name": "posix", 00:26:02.625 "recv_buf_size": 2097152, 00:26:02.625 "send_buf_size": 2097152, 00:26:02.625 "enable_recv_pipe": true, 00:26:02.625 "enable_quickack": false, 00:26:02.625 "enable_placement_id": 0, 00:26:02.625 "enable_zerocopy_send_server": true, 00:26:02.625 "enable_zerocopy_send_client": false, 00:26:02.625 "zerocopy_threshold": 0, 00:26:02.625 "tls_version": 0, 00:26:02.625 "enable_ktls": false 00:26:02.625 } 00:26:02.625 } 00:26:02.625 
] 00:26:02.625 }, 00:26:02.625 { 00:26:02.625 "subsystem": "vmd", 00:26:02.625 "config": [] 00:26:02.625 }, 00:26:02.625 { 00:26:02.625 "subsystem": "accel", 00:26:02.625 "config": [ 00:26:02.625 { 00:26:02.625 "method": "accel_set_options", 00:26:02.625 "params": { 00:26:02.625 "small_cache_size": 128, 00:26:02.625 "large_cache_size": 16, 00:26:02.625 "task_count": 2048, 00:26:02.625 "sequence_count": 2048, 00:26:02.625 "buf_count": 2048 00:26:02.625 } 00:26:02.625 } 00:26:02.625 ] 00:26:02.625 }, 00:26:02.625 { 00:26:02.625 "subsystem": "bdev", 00:26:02.625 "config": [ 00:26:02.625 { 00:26:02.625 "method": "bdev_set_options", 00:26:02.625 "params": { 00:26:02.625 "bdev_io_pool_size": 65535, 00:26:02.625 "bdev_io_cache_size": 256, 00:26:02.625 "bdev_auto_examine": true, 00:26:02.625 "iobuf_small_cache_size": 128, 00:26:02.625 "iobuf_large_cache_size": 16 00:26:02.625 } 00:26:02.625 }, 00:26:02.625 { 00:26:02.625 "method": "bdev_raid_set_options", 00:26:02.625 "params": { 00:26:02.625 "process_window_size_kb": 1024, 00:26:02.625 "process_max_bandwidth_mb_sec": 0 00:26:02.625 } 00:26:02.625 }, 00:26:02.625 { 00:26:02.625 "method": "bdev_iscsi_set_options", 00:26:02.625 "params": { 00:26:02.625 "timeout_sec": 30 00:26:02.625 } 00:26:02.625 }, 00:26:02.625 { 00:26:02.625 "method": "bdev_nvme_set_options", 00:26:02.625 "params": { 00:26:02.625 "action_on_timeout": "none", 00:26:02.625 "timeout_us": 0, 00:26:02.625 "timeout_admin_us": 0, 00:26:02.625 "keep_alive_timeout_ms": 10000, 00:26:02.625 "arbitration_burst": 0, 00:26:02.625 "low_priority_weight": 0, 00:26:02.625 "medium_priority_weight": 0, 00:26:02.625 "high_priority_weight": 0, 00:26:02.625 "nvme_adminq_poll_period_us": 10000, 00:26:02.625 "nvme_ioq_poll_period_us": 0, 00:26:02.625 "io_queue_requests": 0, 00:26:02.625 "delay_cmd_submit": true, 00:26:02.625 "transport_retry_count": 4, 00:26:02.625 "bdev_retry_count": 3, 00:26:02.625 "transport_ack_timeout": 0, 00:26:02.625 "ctrlr_loss_timeout_sec": 0, 
00:26:02.625 "reconnect_delay_sec": 0, 00:26:02.625 "fast_io_fail_timeout_sec": 0, 00:26:02.625 "disable_auto_failback": false, 00:26:02.625 "generate_uuids": false, 00:26:02.625 "transport_tos": 0, 00:26:02.626 "nvme_error_stat": false, 00:26:02.626 "rdma_srq_size": 0, 00:26:02.626 "io_path_stat": false, 00:26:02.626 "allow_accel_sequence": false, 00:26:02.626 "rdma_max_cq_size": 0, 00:26:02.626 "rdma_cm_event_timeout_ms": 0, 00:26:02.626 "dhchap_digests": [ 00:26:02.626 "sha256", 00:26:02.626 "sha384", 00:26:02.626 "sha512" 00:26:02.626 ], 00:26:02.626 "dhchap_dhgroups": [ 00:26:02.626 "null", 00:26:02.626 "ffdhe2048", 00:26:02.626 "ffdhe3072", 00:26:02.626 "ffdhe4096", 00:26:02.626 "ffdhe6144", 00:26:02.626 "ffdhe8192" 00:26:02.626 ] 00:26:02.626 } 00:26:02.626 }, 00:26:02.626 { 00:26:02.626 "method": "bdev_nvme_set_hotplug", 00:26:02.626 "params": { 00:26:02.626 "period_us": 100000, 00:26:02.626 "enable": false 00:26:02.626 } 00:26:02.626 }, 00:26:02.626 { 00:26:02.626 "method": "bdev_malloc_create", 00:26:02.626 "params": { 00:26:02.626 "name": "malloc0", 00:26:02.626 "num_blocks": 8192, 00:26:02.626 "block_size": 4096, 00:26:02.626 "physical_block_size": 4096, 00:26:02.626 "uuid": "bab897fb-b0b1-4d3e-8361-7c37e3d1ff96", 00:26:02.626 "optimal_io_boundary": 0, 00:26:02.626 "md_size": 0, 00:26:02.626 "dif_type": 0, 00:26:02.626 "dif_is_head_of_md": false, 00:26:02.626 "dif_pi_format": 0 00:26:02.626 } 00:26:02.626 }, 00:26:02.626 { 00:26:02.626 "method": "bdev_wait_for_examine" 00:26:02.626 } 00:26:02.626 ] 00:26:02.626 }, 00:26:02.626 { 00:26:02.626 "subsystem": "nbd", 00:26:02.626 "config": [] 00:26:02.626 }, 00:26:02.626 { 00:26:02.626 "subsystem": "scheduler", 00:26:02.626 "config": [ 00:26:02.626 { 00:26:02.626 "method": "framework_set_scheduler", 00:26:02.626 "params": { 00:26:02.626 "name": "static" 00:26:02.626 } 00:26:02.626 } 00:26:02.626 ] 00:26:02.626 }, 00:26:02.626 { 00:26:02.626 "subsystem": "nvmf", 00:26:02.626 "config": [ 00:26:02.626 { 
00:26:02.626 "method": "nvmf_set_config", 00:26:02.626 "params": { 00:26:02.626 "discovery_filter": "match_any", 00:26:02.626 "admin_cmd_passthru": { 00:26:02.626 "identify_ctrlr": false 00:26:02.626 }, 00:26:02.626 "dhchap_digests": [ 00:26:02.626 "sha256", 00:26:02.626 "sha384", 00:26:02.626 "sha512" 00:26:02.626 ], 00:26:02.626 "dhchap_dhgroups": [ 00:26:02.626 "null", 00:26:02.626 "ffdhe2048", 00:26:02.626 "ffdhe3072", 00:26:02.626 "ffdhe4096", 00:26:02.626 "ffdhe6144", 00:26:02.626 "ffdhe8192" 00:26:02.626 ] 00:26:02.626 } 00:26:02.626 }, 00:26:02.626 { 00:26:02.626 "method": "nvmf_set_max_subsystems", 00:26:02.626 "params": { 00:26:02.626 "max_subsystems": 1024 00:26:02.626 } 00:26:02.626 }, 00:26:02.626 { 00:26:02.626 "method": "nvmf_set_crdt", 00:26:02.626 "params": { 00:26:02.626 "crdt1": 0, 00:26:02.626 "crdt2": 0, 00:26:02.626 "crdt3": 0 00:26:02.626 } 00:26:02.626 }, 00:26:02.626 { 00:26:02.626 "method": "nvmf_create_transport", 00:26:02.626 "params": { 00:26:02.626 "trtype": "TCP", 00:26:02.626 "max_queue_depth": 128, 00:26:02.626 "max_io_qpairs_per_ctrlr": 127, 00:26:02.626 "in_capsule_data_size": 4096, 00:26:02.626 "max_io_size": 131072, 00:26:02.626 "io_unit_size": 131072, 00:26:02.626 "max_aq_depth": 128, 00:26:02.626 "num_shared_buffers": 511, 00:26:02.626 "buf_cache_size": 4294967295, 00:26:02.626 "dif_insert_or_strip": false, 00:26:02.626 "zcopy": false, 00:26:02.626 "c2h_success": false, 00:26:02.626 "sock_priority": 0, 00:26:02.626 "abort_timeout_sec": 1, 00:26:02.626 "ack_timeout": 0, 00:26:02.626 "data_wr_pool_size": 0 00:26:02.626 } 00:26:02.626 }, 00:26:02.626 { 00:26:02.626 "method": "nvmf_create_subsystem", 00:26:02.626 "params": { 00:26:02.626 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:02.626 "allow_any_host": false, 00:26:02.626 "serial_number": "00000000000000000000", 00:26:02.626 "model_number": "SPDK bdev Controller", 00:26:02.626 "max_namespaces": 32, 00:26:02.626 "min_cntlid": 1, 00:26:02.626 "max_cntlid": 65519, 00:26:02.626 
"ana_reporting": false 00:26:02.626 } 00:26:02.626 }, 00:26:02.626 { 00:26:02.626 "method": "nvmf_subsystem_add_host", 00:26:02.626 "params": { 00:26:02.626 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:02.626 "host": "nqn.2016-06.io.spdk:host1", 00:26:02.626 "psk": "key0" 00:26:02.626 } 00:26:02.626 }, 00:26:02.626 { 00:26:02.626 "method": "nvmf_subsystem_add_ns", 00:26:02.626 "params": { 00:26:02.626 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:02.626 "namespace": { 00:26:02.626 "nsid": 1, 00:26:02.626 "bdev_name": "malloc0", 00:26:02.626 "nguid": "BAB897FBB0B14D3E83617C37E3D1FF96", 00:26:02.626 "uuid": "bab897fb-b0b1-4d3e-8361-7c37e3d1ff96", 00:26:02.626 "no_auto_visible": false 00:26:02.626 } 00:26:02.626 } 00:26:02.626 }, 00:26:02.626 { 00:26:02.626 "method": "nvmf_subsystem_add_listener", 00:26:02.626 "params": { 00:26:02.626 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:02.626 "listen_address": { 00:26:02.626 "trtype": "TCP", 00:26:02.626 "adrfam": "IPv4", 00:26:02.626 "traddr": "10.0.0.2", 00:26:02.626 "trsvcid": "4420" 00:26:02.626 }, 00:26:02.626 "secure_channel": false, 00:26:02.626 "sock_impl": "ssl" 00:26:02.626 } 00:26:02.626 } 00:26:02.626 ] 00:26:02.626 } 00:26:02.626 ] 00:26:02.626 }' 00:26:02.626 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:26:02.887 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:26:02.887 "subsystems": [ 00:26:02.887 { 00:26:02.887 "subsystem": "keyring", 00:26:02.887 "config": [ 00:26:02.887 { 00:26:02.887 "method": "keyring_file_add_key", 00:26:02.887 "params": { 00:26:02.887 "name": "key0", 00:26:02.887 "path": "/tmp/tmp.d3G3e1sSNa" 00:26:02.887 } 00:26:02.887 } 00:26:02.887 ] 00:26:02.887 }, 00:26:02.887 { 00:26:02.887 "subsystem": "iobuf", 00:26:02.887 "config": [ 00:26:02.887 { 00:26:02.887 "method": "iobuf_set_options", 00:26:02.887 "params": { 00:26:02.887 
"small_pool_count": 8192, 00:26:02.887 "large_pool_count": 1024, 00:26:02.887 "small_bufsize": 8192, 00:26:02.887 "large_bufsize": 135168, 00:26:02.887 "enable_numa": false 00:26:02.887 } 00:26:02.887 } 00:26:02.887 ] 00:26:02.887 }, 00:26:02.887 { 00:26:02.887 "subsystem": "sock", 00:26:02.887 "config": [ 00:26:02.887 { 00:26:02.887 "method": "sock_set_default_impl", 00:26:02.887 "params": { 00:26:02.887 "impl_name": "posix" 00:26:02.887 } 00:26:02.887 }, 00:26:02.887 { 00:26:02.887 "method": "sock_impl_set_options", 00:26:02.887 "params": { 00:26:02.887 "impl_name": "ssl", 00:26:02.887 "recv_buf_size": 4096, 00:26:02.887 "send_buf_size": 4096, 00:26:02.887 "enable_recv_pipe": true, 00:26:02.887 "enable_quickack": false, 00:26:02.887 "enable_placement_id": 0, 00:26:02.887 "enable_zerocopy_send_server": true, 00:26:02.887 "enable_zerocopy_send_client": false, 00:26:02.887 "zerocopy_threshold": 0, 00:26:02.887 "tls_version": 0, 00:26:02.887 "enable_ktls": false 00:26:02.887 } 00:26:02.887 }, 00:26:02.887 { 00:26:02.887 "method": "sock_impl_set_options", 00:26:02.887 "params": { 00:26:02.887 "impl_name": "posix", 00:26:02.887 "recv_buf_size": 2097152, 00:26:02.887 "send_buf_size": 2097152, 00:26:02.887 "enable_recv_pipe": true, 00:26:02.887 "enable_quickack": false, 00:26:02.887 "enable_placement_id": 0, 00:26:02.887 "enable_zerocopy_send_server": true, 00:26:02.887 "enable_zerocopy_send_client": false, 00:26:02.887 "zerocopy_threshold": 0, 00:26:02.887 "tls_version": 0, 00:26:02.887 "enable_ktls": false 00:26:02.887 } 00:26:02.887 } 00:26:02.887 ] 00:26:02.887 }, 00:26:02.887 { 00:26:02.887 "subsystem": "vmd", 00:26:02.887 "config": [] 00:26:02.887 }, 00:26:02.887 { 00:26:02.887 "subsystem": "accel", 00:26:02.887 "config": [ 00:26:02.887 { 00:26:02.887 "method": "accel_set_options", 00:26:02.887 "params": { 00:26:02.887 "small_cache_size": 128, 00:26:02.887 "large_cache_size": 16, 00:26:02.887 "task_count": 2048, 00:26:02.887 "sequence_count": 2048, 00:26:02.887 
"buf_count": 2048 00:26:02.887 } 00:26:02.887 } 00:26:02.887 ] 00:26:02.887 }, 00:26:02.887 { 00:26:02.887 "subsystem": "bdev", 00:26:02.887 "config": [ 00:26:02.887 { 00:26:02.887 "method": "bdev_set_options", 00:26:02.887 "params": { 00:26:02.887 "bdev_io_pool_size": 65535, 00:26:02.887 "bdev_io_cache_size": 256, 00:26:02.887 "bdev_auto_examine": true, 00:26:02.887 "iobuf_small_cache_size": 128, 00:26:02.887 "iobuf_large_cache_size": 16 00:26:02.887 } 00:26:02.887 }, 00:26:02.887 { 00:26:02.887 "method": "bdev_raid_set_options", 00:26:02.887 "params": { 00:26:02.887 "process_window_size_kb": 1024, 00:26:02.887 "process_max_bandwidth_mb_sec": 0 00:26:02.887 } 00:26:02.887 }, 00:26:02.887 { 00:26:02.887 "method": "bdev_iscsi_set_options", 00:26:02.887 "params": { 00:26:02.887 "timeout_sec": 30 00:26:02.887 } 00:26:02.887 }, 00:26:02.887 { 00:26:02.887 "method": "bdev_nvme_set_options", 00:26:02.887 "params": { 00:26:02.887 "action_on_timeout": "none", 00:26:02.887 "timeout_us": 0, 00:26:02.887 "timeout_admin_us": 0, 00:26:02.887 "keep_alive_timeout_ms": 10000, 00:26:02.887 "arbitration_burst": 0, 00:26:02.887 "low_priority_weight": 0, 00:26:02.887 "medium_priority_weight": 0, 00:26:02.887 "high_priority_weight": 0, 00:26:02.887 "nvme_adminq_poll_period_us": 10000, 00:26:02.887 "nvme_ioq_poll_period_us": 0, 00:26:02.887 "io_queue_requests": 512, 00:26:02.887 "delay_cmd_submit": true, 00:26:02.887 "transport_retry_count": 4, 00:26:02.887 "bdev_retry_count": 3, 00:26:02.887 "transport_ack_timeout": 0, 00:26:02.887 "ctrlr_loss_timeout_sec": 0, 00:26:02.887 "reconnect_delay_sec": 0, 00:26:02.887 "fast_io_fail_timeout_sec": 0, 00:26:02.887 "disable_auto_failback": false, 00:26:02.887 "generate_uuids": false, 00:26:02.887 "transport_tos": 0, 00:26:02.887 "nvme_error_stat": false, 00:26:02.887 "rdma_srq_size": 0, 00:26:02.887 "io_path_stat": false, 00:26:02.887 "allow_accel_sequence": false, 00:26:02.887 "rdma_max_cq_size": 0, 00:26:02.887 "rdma_cm_event_timeout_ms": 0, 
00:26:02.887 "dhchap_digests": [ 00:26:02.887 "sha256", 00:26:02.887 "sha384", 00:26:02.887 "sha512" 00:26:02.887 ], 00:26:02.887 "dhchap_dhgroups": [ 00:26:02.887 "null", 00:26:02.887 "ffdhe2048", 00:26:02.887 "ffdhe3072", 00:26:02.887 "ffdhe4096", 00:26:02.887 "ffdhe6144", 00:26:02.887 "ffdhe8192" 00:26:02.887 ] 00:26:02.887 } 00:26:02.887 }, 00:26:02.887 { 00:26:02.887 "method": "bdev_nvme_attach_controller", 00:26:02.887 "params": { 00:26:02.887 "name": "nvme0", 00:26:02.887 "trtype": "TCP", 00:26:02.887 "adrfam": "IPv4", 00:26:02.887 "traddr": "10.0.0.2", 00:26:02.887 "trsvcid": "4420", 00:26:02.887 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:02.887 "prchk_reftag": false, 00:26:02.887 "prchk_guard": false, 00:26:02.887 "ctrlr_loss_timeout_sec": 0, 00:26:02.888 "reconnect_delay_sec": 0, 00:26:02.888 "fast_io_fail_timeout_sec": 0, 00:26:02.888 "psk": "key0", 00:26:02.888 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:02.888 "hdgst": false, 00:26:02.888 "ddgst": false, 00:26:02.888 "multipath": "multipath" 00:26:02.888 } 00:26:02.888 }, 00:26:02.888 { 00:26:02.888 "method": "bdev_nvme_set_hotplug", 00:26:02.888 "params": { 00:26:02.888 "period_us": 100000, 00:26:02.888 "enable": false 00:26:02.888 } 00:26:02.888 }, 00:26:02.888 { 00:26:02.888 "method": "bdev_enable_histogram", 00:26:02.888 "params": { 00:26:02.888 "name": "nvme0n1", 00:26:02.888 "enable": true 00:26:02.888 } 00:26:02.888 }, 00:26:02.888 { 00:26:02.888 "method": "bdev_wait_for_examine" 00:26:02.888 } 00:26:02.888 ] 00:26:02.888 }, 00:26:02.888 { 00:26:02.888 "subsystem": "nbd", 00:26:02.888 "config": [] 00:26:02.888 } 00:26:02.888 ] 00:26:02.888 }' 00:26:02.888 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3455284 00:26:02.888 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3455284 ']' 00:26:02.888 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3455284 00:26:02.888 12:56:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:26:02.888 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:02.888 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3455284 00:26:02.888 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:02.888 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:02.888 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3455284' 00:26:02.888 killing process with pid 3455284 00:26:02.888 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3455284 00:26:02.888 Received shutdown signal, test time was about 1.000000 seconds 00:26:02.888 00:26:02.888 Latency(us) 00:26:02.888 [2024-11-28T11:56:33.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.888 [2024-11-28T11:56:33.015Z] =================================================================================================================== 00:26:02.888 [2024-11-28T11:56:33.015Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:02.888 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3455284 00:26:02.888 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3454947 00:26:02.888 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3454947 ']' 00:26:02.888 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3454947 00:26:02.888 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:26:02.888 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:02.888 
12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3454947 00:26:03.149 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:03.149 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:03.149 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3454947' 00:26:03.149 killing process with pid 3454947 00:26:03.149 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3454947 00:26:03.149 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3454947 00:26:03.149 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:26:03.149 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:03.149 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:03.149 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:26:03.149 "subsystems": [ 00:26:03.149 { 00:26:03.149 "subsystem": "keyring", 00:26:03.149 "config": [ 00:26:03.149 { 00:26:03.149 "method": "keyring_file_add_key", 00:26:03.149 "params": { 00:26:03.149 "name": "key0", 00:26:03.149 "path": "/tmp/tmp.d3G3e1sSNa" 00:26:03.149 } 00:26:03.149 } 00:26:03.149 ] 00:26:03.149 }, 00:26:03.149 { 00:26:03.149 "subsystem": "iobuf", 00:26:03.149 "config": [ 00:26:03.149 { 00:26:03.149 "method": "iobuf_set_options", 00:26:03.149 "params": { 00:26:03.149 "small_pool_count": 8192, 00:26:03.149 "large_pool_count": 1024, 00:26:03.149 "small_bufsize": 8192, 00:26:03.149 "large_bufsize": 135168, 00:26:03.149 "enable_numa": false 00:26:03.149 } 00:26:03.149 } 00:26:03.149 ] 00:26:03.149 }, 00:26:03.149 { 00:26:03.149 "subsystem": "sock", 00:26:03.149 "config": [ 
00:26:03.149 { 00:26:03.149 "method": "sock_set_default_impl", 00:26:03.149 "params": { 00:26:03.149 "impl_name": "posix" 00:26:03.149 } 00:26:03.149 }, 00:26:03.149 { 00:26:03.149 "method": "sock_impl_set_options", 00:26:03.149 "params": { 00:26:03.149 "impl_name": "ssl", 00:26:03.149 "recv_buf_size": 4096, 00:26:03.149 "send_buf_size": 4096, 00:26:03.149 "enable_recv_pipe": true, 00:26:03.149 "enable_quickack": false, 00:26:03.149 "enable_placement_id": 0, 00:26:03.149 "enable_zerocopy_send_server": true, 00:26:03.149 "enable_zerocopy_send_client": false, 00:26:03.149 "zerocopy_threshold": 0, 00:26:03.149 "tls_version": 0, 00:26:03.149 "enable_ktls": false 00:26:03.149 } 00:26:03.149 }, 00:26:03.149 { 00:26:03.149 "method": "sock_impl_set_options", 00:26:03.149 "params": { 00:26:03.149 "impl_name": "posix", 00:26:03.149 "recv_buf_size": 2097152, 00:26:03.149 "send_buf_size": 2097152, 00:26:03.149 "enable_recv_pipe": true, 00:26:03.149 "enable_quickack": false, 00:26:03.149 "enable_placement_id": 0, 00:26:03.149 "enable_zerocopy_send_server": true, 00:26:03.149 "enable_zerocopy_send_client": false, 00:26:03.149 "zerocopy_threshold": 0, 00:26:03.149 "tls_version": 0, 00:26:03.149 "enable_ktls": false 00:26:03.149 } 00:26:03.149 } 00:26:03.149 ] 00:26:03.149 }, 00:26:03.149 { 00:26:03.149 "subsystem": "vmd", 00:26:03.149 "config": [] 00:26:03.149 }, 00:26:03.149 { 00:26:03.149 "subsystem": "accel", 00:26:03.149 "config": [ 00:26:03.149 { 00:26:03.149 "method": "accel_set_options", 00:26:03.149 "params": { 00:26:03.149 "small_cache_size": 128, 00:26:03.149 "large_cache_size": 16, 00:26:03.149 "task_count": 2048, 00:26:03.149 "sequence_count": 2048, 00:26:03.149 "buf_count": 2048 00:26:03.149 } 00:26:03.149 } 00:26:03.149 ] 00:26:03.149 }, 00:26:03.149 { 00:26:03.149 "subsystem": "bdev", 00:26:03.149 "config": [ 00:26:03.149 { 00:26:03.149 "method": "bdev_set_options", 00:26:03.149 "params": { 00:26:03.149 "bdev_io_pool_size": 65535, 00:26:03.149 "bdev_io_cache_size": 
256, 00:26:03.149 "bdev_auto_examine": true, 00:26:03.149 "iobuf_small_cache_size": 128, 00:26:03.149 "iobuf_large_cache_size": 16 00:26:03.149 } 00:26:03.149 }, 00:26:03.149 { 00:26:03.149 "method": "bdev_raid_set_options", 00:26:03.149 "params": { 00:26:03.149 "process_window_size_kb": 1024, 00:26:03.149 "process_max_bandwidth_mb_sec": 0 00:26:03.149 } 00:26:03.149 }, 00:26:03.149 { 00:26:03.149 "method": "bdev_iscsi_set_options", 00:26:03.149 "params": { 00:26:03.149 "timeout_sec": 30 00:26:03.149 } 00:26:03.149 }, 00:26:03.149 { 00:26:03.149 "method": "bdev_nvme_set_options", 00:26:03.149 "params": { 00:26:03.149 "action_on_timeout": "none", 00:26:03.149 "timeout_us": 0, 00:26:03.149 "timeout_admin_us": 0, 00:26:03.149 "keep_alive_timeout_ms": 10000, 00:26:03.149 "arbitration_burst": 0, 00:26:03.149 "low_priority_weight": 0, 00:26:03.149 "medium_priority_weight": 0, 00:26:03.149 "high_priority_weight": 0, 00:26:03.149 "nvme_adminq_poll_period_us": 10000, 00:26:03.149 "nvme_ioq_poll_period_us": 0, 00:26:03.149 "io_queue_requests": 0, 00:26:03.149 "delay_cmd_submit": true, 00:26:03.149 "transport_retry_count": 4, 00:26:03.149 "bdev_retry_count": 3, 00:26:03.149 "transport_ack_timeout": 0, 00:26:03.149 "ctrlr_loss_timeout_sec": 0, 00:26:03.149 "reconnect_delay_sec": 0, 00:26:03.149 "fast_io_fail_timeout_sec": 0, 00:26:03.149 "disable_auto_failback": false, 00:26:03.149 "generate_uuids": false, 00:26:03.149 "transport_tos": 0, 00:26:03.150 "nvme_error_stat": false, 00:26:03.150 "rdma_srq_size": 0, 00:26:03.150 "io_path_stat": false, 00:26:03.150 "allow_accel_sequence": false, 00:26:03.150 "rdma_max_cq_size": 0, 00:26:03.150 "rdma_cm_event_timeout_ms": 0, 00:26:03.150 "dhchap_digests": [ 00:26:03.150 "sha256", 00:26:03.150 "sha384", 00:26:03.150 "sha512" 00:26:03.150 ], 00:26:03.150 "dhchap_dhgroups": [ 00:26:03.150 "null", 00:26:03.150 "ffdhe2048", 00:26:03.150 "ffdhe3072", 00:26:03.150 "ffdhe4096", 00:26:03.150 "ffdhe6144", 00:26:03.150 "ffdhe8192" 00:26:03.150 ] 
00:26:03.150 } 00:26:03.150 }, 00:26:03.150 { 00:26:03.150 "method": "bdev_nvme_set_hotplug", 00:26:03.150 "params": { 00:26:03.150 "period_us": 100000, 00:26:03.150 "enable": false 00:26:03.150 } 00:26:03.150 }, 00:26:03.150 { 00:26:03.150 "method": "bdev_malloc_create", 00:26:03.150 "params": { 00:26:03.150 "name": "malloc0", 00:26:03.150 "num_blocks": 8192, 00:26:03.150 "block_size": 4096, 00:26:03.150 "physical_block_size": 4096, 00:26:03.150 "uuid": "bab897fb-b0b1-4d3e-8361-7c37e3d1ff96", 00:26:03.150 "optimal_io_boundary": 0, 00:26:03.150 "md_size": 0, 00:26:03.150 "dif_type": 0, 00:26:03.150 "dif_is_head_of_md": false, 00:26:03.150 "dif_pi_format": 0 00:26:03.150 } 00:26:03.150 }, 00:26:03.150 { 00:26:03.150 "method": "bdev_wait_for_examine" 00:26:03.150 } 00:26:03.150 ] 00:26:03.150 }, 00:26:03.150 { 00:26:03.150 "subsystem": "nbd", 00:26:03.150 "config": [] 00:26:03.150 }, 00:26:03.150 { 00:26:03.150 "subsystem": "scheduler", 00:26:03.150 "config": [ 00:26:03.150 { 00:26:03.150 "method": "framework_set_scheduler", 00:26:03.150 "params": { 00:26:03.150 "name": "static" 00:26:03.150 } 00:26:03.150 } 00:26:03.150 ] 00:26:03.150 }, 00:26:03.150 { 00:26:03.150 "subsystem": "nvmf", 00:26:03.150 "config": [ 00:26:03.150 { 00:26:03.150 "method": "nvmf_set_config", 00:26:03.150 "params": { 00:26:03.150 "discovery_filter": "match_any", 00:26:03.150 "admin_cmd_passthru": { 00:26:03.150 "identify_ctrlr": false 00:26:03.150 }, 00:26:03.150 "dhchap_digests": [ 00:26:03.150 "sha256", 00:26:03.150 "sha384", 00:26:03.150 "sha512" 00:26:03.150 ], 00:26:03.150 "dhchap_dhgroups": [ 00:26:03.150 "null", 00:26:03.150 "ffdhe2048", 00:26:03.150 "ffdhe3072", 00:26:03.150 "ffdhe4096", 00:26:03.150 "ffdhe6144", 00:26:03.150 "ffdhe8192" 00:26:03.150 ] 00:26:03.150 } 00:26:03.150 }, 00:26:03.150 { 00:26:03.150 "method": "nvmf_set_max_subsystems", 00:26:03.150 "params": { 00:26:03.150 "max_subsystems": 1024 00:26:03.150 } 00:26:03.150 }, 00:26:03.150 { 00:26:03.150 "method": 
"nvmf_set_crdt", 00:26:03.150 "params": { 00:26:03.150 "crdt1": 0, 00:26:03.150 "crdt2": 0, 00:26:03.150 "crdt3": 0 00:26:03.150 } 00:26:03.150 }, 00:26:03.150 { 00:26:03.150 "method": "nvmf_create_transport", 00:26:03.150 "params": { 00:26:03.150 "trtype": "TCP", 00:26:03.150 "max_queue_depth": 128, 00:26:03.150 "max_io_qpairs_per_ctrlr": 127, 00:26:03.150 "in_capsule_data_size": 4096, 00:26:03.150 "max_io_size": 131072, 00:26:03.150 "io_unit_size": 131072, 00:26:03.150 "max_aq_depth": 128, 00:26:03.150 "num_shared_buffers": 511, 00:26:03.150 "buf_cache_size": 4294967295, 00:26:03.150 "dif_insert_or_strip": false, 00:26:03.150 "zcopy": false, 00:26:03.150 "c2h_success": false, 00:26:03.150 "sock_priority": 0, 00:26:03.150 "abort_timeout_sec": 1, 00:26:03.150 "ack_timeout": 0, 00:26:03.150 "data_wr_pool_size": 0 00:26:03.150 } 00:26:03.150 }, 00:26:03.150 { 00:26:03.150 "method": "nvmf_create_subsystem", 00:26:03.150 "params": { 00:26:03.150 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:03.150 "allow_any_host": false, 00:26:03.150 "serial_number": "00000000000000000000", 00:26:03.150 "model_number": "SPDK bdev Controller", 00:26:03.150 "max_namespaces": 32, 00:26:03.150 "min_cntlid": 1, 00:26:03.150 "max_cntlid": 65519, 00:26:03.150 "ana_reporting": false 00:26:03.150 } 00:26:03.150 }, 00:26:03.150 { 00:26:03.150 "method": "nvmf_subsystem_add_host", 00:26:03.150 "params": { 00:26:03.150 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:03.150 "host": "nqn.2016-06.io.spdk:host1", 00:26:03.150 "psk": "key0" 00:26:03.150 } 00:26:03.150 }, 00:26:03.150 { 00:26:03.150 "method": "nvmf_subsystem_add_ns", 00:26:03.150 "params": { 00:26:03.150 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:03.150 "namespace": { 00:26:03.150 "nsid": 1, 00:26:03.150 "bdev_name": "malloc0", 00:26:03.150 "nguid": "BAB897FBB0B14D3E83617C37E3D1FF96", 00:26:03.150 "uuid": "bab897fb-b0b1-4d3e-8361-7c37e3d1ff96", 00:26:03.150 "no_auto_visible": false 00:26:03.150 } 00:26:03.150 } 00:26:03.150 }, 00:26:03.150 { 
00:26:03.150 "method": "nvmf_subsystem_add_listener", 00:26:03.150 "params": { 00:26:03.150 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:03.150 "listen_address": { 00:26:03.150 "trtype": "TCP", 00:26:03.150 "adrfam": "IPv4", 00:26:03.150 "traddr": "10.0.0.2", 00:26:03.150 "trsvcid": "4420" 00:26:03.150 }, 00:26:03.150 "secure_channel": false, 00:26:03.150 "sock_impl": "ssl" 00:26:03.150 } 00:26:03.150 } 00:26:03.150 ] 00:26:03.150 } 00:26:03.150 ] 00:26:03.150 }' 00:26:03.150 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:03.150 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3455893 00:26:03.150 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3455893 00:26:03.150 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:26:03.150 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3455893 ']' 00:26:03.150 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.150 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:03.150 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:03.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.150 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:03.150 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:03.150 [2024-11-28 12:56:33.231896] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:26:03.150 [2024-11-28 12:56:33.231956] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:03.410 [2024-11-28 12:56:33.371806] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:03.410 [2024-11-28 12:56:33.426497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.410 [2024-11-28 12:56:33.448434] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:03.410 [2024-11-28 12:56:33.448470] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:03.410 [2024-11-28 12:56:33.448478] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:03.410 [2024-11-28 12:56:33.448483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:03.410 [2024-11-28 12:56:33.448488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:03.410 [2024-11-28 12:56:33.449130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.671 [2024-11-28 12:56:33.637632] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.671 [2024-11-28 12:56:33.669586] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:03.671 [2024-11-28 12:56:33.669781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.931 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:03.931 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:26:03.931 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:03.931 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:03.931 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:04.192 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:04.192 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3456002 00:26:04.192 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3456002 /var/tmp/bdevperf.sock 00:26:04.192 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3456002 ']' 00:26:04.192 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:04.192 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:04.192 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:26:04.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:04.192 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:26:04.192 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:04.192 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:04.192 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:26:04.192 "subsystems": [ 00:26:04.192 { 00:26:04.192 "subsystem": "keyring", 00:26:04.192 "config": [ 00:26:04.192 { 00:26:04.192 "method": "keyring_file_add_key", 00:26:04.192 "params": { 00:26:04.192 "name": "key0", 00:26:04.192 "path": "/tmp/tmp.d3G3e1sSNa" 00:26:04.192 } 00:26:04.192 } 00:26:04.192 ] 00:26:04.192 }, 00:26:04.192 { 00:26:04.192 "subsystem": "iobuf", 00:26:04.192 "config": [ 00:26:04.192 { 00:26:04.192 "method": "iobuf_set_options", 00:26:04.192 "params": { 00:26:04.192 "small_pool_count": 8192, 00:26:04.192 "large_pool_count": 1024, 00:26:04.192 "small_bufsize": 8192, 00:26:04.192 "large_bufsize": 135168, 00:26:04.192 "enable_numa": false 00:26:04.192 } 00:26:04.192 } 00:26:04.192 ] 00:26:04.192 }, 00:26:04.192 { 00:26:04.192 "subsystem": "sock", 00:26:04.192 "config": [ 00:26:04.192 { 00:26:04.192 "method": "sock_set_default_impl", 00:26:04.192 "params": { 00:26:04.192 "impl_name": "posix" 00:26:04.192 } 00:26:04.192 }, 00:26:04.192 { 00:26:04.192 "method": "sock_impl_set_options", 00:26:04.192 "params": { 00:26:04.192 "impl_name": "ssl", 00:26:04.192 "recv_buf_size": 4096, 00:26:04.192 "send_buf_size": 4096, 00:26:04.192 "enable_recv_pipe": true, 00:26:04.192 "enable_quickack": false, 00:26:04.192 "enable_placement_id": 0, 00:26:04.192 "enable_zerocopy_send_server": true, 00:26:04.192 
"enable_zerocopy_send_client": false, 00:26:04.192 "zerocopy_threshold": 0, 00:26:04.192 "tls_version": 0, 00:26:04.192 "enable_ktls": false 00:26:04.192 } 00:26:04.192 }, 00:26:04.192 { 00:26:04.192 "method": "sock_impl_set_options", 00:26:04.192 "params": { 00:26:04.192 "impl_name": "posix", 00:26:04.192 "recv_buf_size": 2097152, 00:26:04.192 "send_buf_size": 2097152, 00:26:04.192 "enable_recv_pipe": true, 00:26:04.192 "enable_quickack": false, 00:26:04.192 "enable_placement_id": 0, 00:26:04.192 "enable_zerocopy_send_server": true, 00:26:04.192 "enable_zerocopy_send_client": false, 00:26:04.192 "zerocopy_threshold": 0, 00:26:04.192 "tls_version": 0, 00:26:04.192 "enable_ktls": false 00:26:04.192 } 00:26:04.192 } 00:26:04.192 ] 00:26:04.192 }, 00:26:04.192 { 00:26:04.192 "subsystem": "vmd", 00:26:04.192 "config": [] 00:26:04.192 }, 00:26:04.192 { 00:26:04.192 "subsystem": "accel", 00:26:04.192 "config": [ 00:26:04.192 { 00:26:04.192 "method": "accel_set_options", 00:26:04.192 "params": { 00:26:04.192 "small_cache_size": 128, 00:26:04.192 "large_cache_size": 16, 00:26:04.192 "task_count": 2048, 00:26:04.192 "sequence_count": 2048, 00:26:04.192 "buf_count": 2048 00:26:04.192 } 00:26:04.192 } 00:26:04.192 ] 00:26:04.192 }, 00:26:04.192 { 00:26:04.192 "subsystem": "bdev", 00:26:04.192 "config": [ 00:26:04.192 { 00:26:04.192 "method": "bdev_set_options", 00:26:04.192 "params": { 00:26:04.192 "bdev_io_pool_size": 65535, 00:26:04.192 "bdev_io_cache_size": 256, 00:26:04.192 "bdev_auto_examine": true, 00:26:04.192 "iobuf_small_cache_size": 128, 00:26:04.192 "iobuf_large_cache_size": 16 00:26:04.192 } 00:26:04.192 }, 00:26:04.192 { 00:26:04.192 "method": "bdev_raid_set_options", 00:26:04.192 "params": { 00:26:04.192 "process_window_size_kb": 1024, 00:26:04.192 "process_max_bandwidth_mb_sec": 0 00:26:04.192 } 00:26:04.192 }, 00:26:04.192 { 00:26:04.192 "method": "bdev_iscsi_set_options", 00:26:04.192 "params": { 00:26:04.192 "timeout_sec": 30 00:26:04.192 } 00:26:04.192 }, 
00:26:04.192 { 00:26:04.192 "method": "bdev_nvme_set_options", 00:26:04.192 "params": { 00:26:04.192 "action_on_timeout": "none", 00:26:04.192 "timeout_us": 0, 00:26:04.192 "timeout_admin_us": 0, 00:26:04.192 "keep_alive_timeout_ms": 10000, 00:26:04.192 "arbitration_burst": 0, 00:26:04.192 "low_priority_weight": 0, 00:26:04.192 "medium_priority_weight": 0, 00:26:04.192 "high_priority_weight": 0, 00:26:04.192 "nvme_adminq_poll_period_us": 10000, 00:26:04.192 "nvme_ioq_poll_period_us": 0, 00:26:04.192 "io_queue_requests": 512, 00:26:04.192 "delay_cmd_submit": true, 00:26:04.192 "transport_retry_count": 4, 00:26:04.192 "bdev_retry_count": 3, 00:26:04.192 "transport_ack_timeout": 0, 00:26:04.192 "ctrlr_loss_timeout_sec": 0, 00:26:04.192 "reconnect_delay_sec": 0, 00:26:04.192 "fast_io_fail_timeout_sec": 0, 00:26:04.192 "disable_auto_failback": false, 00:26:04.192 "generate_uuids": false, 00:26:04.192 "transport_tos": 0, 00:26:04.192 "nvme_error_stat": false, 00:26:04.192 "rdma_srq_size": 0, 00:26:04.192 "io_path_stat": false, 00:26:04.192 "allow_accel_sequence": false, 00:26:04.192 "rdma_max_cq_size": 0, 00:26:04.192 "rdma_cm_event_timeout_ms": 0, 00:26:04.192 "dhchap_digests": [ 00:26:04.192 "sha256", 00:26:04.192 "sha384", 00:26:04.192 "sha512" 00:26:04.192 ], 00:26:04.192 "dhchap_dhgroups": [ 00:26:04.192 "null", 00:26:04.192 "ffdhe2048", 00:26:04.192 "ffdhe3072", 00:26:04.192 "ffdhe4096", 00:26:04.192 "ffdhe6144", 00:26:04.192 "ffdhe8192" 00:26:04.192 ] 00:26:04.192 } 00:26:04.193 }, 00:26:04.193 { 00:26:04.193 "method": "bdev_nvme_attach_controller", 00:26:04.193 "params": { 00:26:04.193 "name": "nvme0", 00:26:04.193 "trtype": "TCP", 00:26:04.193 "adrfam": "IPv4", 00:26:04.193 "traddr": "10.0.0.2", 00:26:04.193 "trsvcid": "4420", 00:26:04.193 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:04.193 "prchk_reftag": false, 00:26:04.193 "prchk_guard": false, 00:26:04.193 "ctrlr_loss_timeout_sec": 0, 00:26:04.193 "reconnect_delay_sec": 0, 00:26:04.193 
"fast_io_fail_timeout_sec": 0, 00:26:04.193 "psk": "key0", 00:26:04.193 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:04.193 "hdgst": false, 00:26:04.193 "ddgst": false, 00:26:04.193 "multipath": "multipath" 00:26:04.193 } 00:26:04.193 }, 00:26:04.193 { 00:26:04.193 "method": "bdev_nvme_set_hotplug", 00:26:04.193 "params": { 00:26:04.193 "period_us": 100000, 00:26:04.193 "enable": false 00:26:04.193 } 00:26:04.193 }, 00:26:04.193 { 00:26:04.193 "method": "bdev_enable_histogram", 00:26:04.193 "params": { 00:26:04.193 "name": "nvme0n1", 00:26:04.193 "enable": true 00:26:04.193 } 00:26:04.193 }, 00:26:04.193 { 00:26:04.193 "method": "bdev_wait_for_examine" 00:26:04.193 } 00:26:04.193 ] 00:26:04.193 }, 00:26:04.193 { 00:26:04.193 "subsystem": "nbd", 00:26:04.193 "config": [] 00:26:04.193 } 00:26:04.193 ] 00:26:04.193 }' 00:26:04.193 [2024-11-28 12:56:34.113086] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:26:04.193 [2024-11-28 12:56:34.113138] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3456002 ] 00:26:04.193 [2024-11-28 12:56:34.245595] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:04.193 [2024-11-28 12:56:34.298415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.193 [2024-11-28 12:56:34.314607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.454 [2024-11-28 12:56:34.444906] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:05.024 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:05.024 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:26:05.024 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:05.024 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:26:05.024 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.024 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:05.024 Running I/O for 1 seconds... 
00:26:06.410 6058.00 IOPS, 23.66 MiB/s 00:26:06.410 Latency(us) 00:26:06.410 [2024-11-28T11:56:36.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.410 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:06.410 Verification LBA range: start 0x0 length 0x2000 00:26:06.410 nvme0n1 : 1.01 6117.13 23.90 0.00 0.00 20796.36 4598.25 70068.56 00:26:06.410 [2024-11-28T11:56:36.537Z] =================================================================================================================== 00:26:06.410 [2024-11-28T11:56:36.537Z] Total : 6117.13 23.90 0.00 0.00 20796.36 4598.25 70068.56 00:26:06.410 { 00:26:06.410 "results": [ 00:26:06.410 { 00:26:06.410 "job": "nvme0n1", 00:26:06.410 "core_mask": "0x2", 00:26:06.410 "workload": "verify", 00:26:06.410 "status": "finished", 00:26:06.410 "verify_range": { 00:26:06.410 "start": 0, 00:26:06.410 "length": 8192 00:26:06.410 }, 00:26:06.410 "queue_depth": 128, 00:26:06.410 "io_size": 4096, 00:26:06.410 "runtime": 1.011258, 00:26:06.410 "iops": 6117.13331316044, 00:26:06.410 "mibps": 23.895052004532968, 00:26:06.410 "io_failed": 0, 00:26:06.410 "io_timeout": 0, 00:26:06.410 "avg_latency_us": 20796.35838726616, 00:26:06.410 "min_latency_us": 4598.249248245907, 00:26:06.410 "max_latency_us": 70068.55997327097 00:26:06.410 } 00:26:06.410 ], 00:26:06.410 "core_count": 1 00:26:06.410 } 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:06.410 nvmf_trace.0 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3456002 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3456002 ']' 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3456002 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3456002 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3456002' 00:26:06.410 killing process with pid 3456002 
00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3456002 00:26:06.410 Received shutdown signal, test time was about 1.000000 seconds 00:26:06.410 00:26:06.410 Latency(us) 00:26:06.410 [2024-11-28T11:56:36.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.410 [2024-11-28T11:56:36.537Z] =================================================================================================================== 00:26:06.410 [2024-11-28T11:56:36.537Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3456002 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:06.410 rmmod nvme_tcp 00:26:06.410 rmmod nvme_fabrics 00:26:06.410 rmmod nvme_keyring 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3455893 ']' 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3455893 00:26:06.410 12:56:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3455893 ']' 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3455893 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:06.410 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3455893 00:26:06.671 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:06.671 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:06.671 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3455893' 00:26:06.671 killing process with pid 3455893 00:26:06.672 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3455893 00:26:06.672 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3455893 00:26:06.672 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:06.672 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:06.672 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:06.672 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:26:06.672 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:26:06.672 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:06.672 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:26:06.672 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:06.672 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:06.672 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.672 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:06.672 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.218 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:09.218 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.IhtLLfysZr /tmp/tmp.j0hK2fNyD6 /tmp/tmp.d3G3e1sSNa 00:26:09.218 00:26:09.218 real 1m28.837s 00:26:09.218 user 2m18.945s 00:26:09.218 sys 0m27.057s 00:26:09.218 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:09.218 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:09.218 ************************************ 00:26:09.218 END TEST nvmf_tls 00:26:09.218 ************************************ 00:26:09.218 12:56:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:09.218 12:56:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:09.218 12:56:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:09.218 12:56:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:09.218 ************************************ 00:26:09.218 START TEST nvmf_fips 00:26:09.218 ************************************ 00:26:09.218 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:09.218 * Looking for test storage... 00:26:09.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:26:09.218 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:09.218 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:26:09.218 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@345 -- # : 1 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:09.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.218 --rc genhtml_branch_coverage=1 
00:26:09.218 --rc genhtml_function_coverage=1 00:26:09.218 --rc genhtml_legend=1 00:26:09.218 --rc geninfo_all_blocks=1 00:26:09.218 --rc geninfo_unexecuted_blocks=1 00:26:09.218 00:26:09.218 ' 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:09.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.218 --rc genhtml_branch_coverage=1 00:26:09.218 --rc genhtml_function_coverage=1 00:26:09.218 --rc genhtml_legend=1 00:26:09.218 --rc geninfo_all_blocks=1 00:26:09.218 --rc geninfo_unexecuted_blocks=1 00:26:09.218 00:26:09.218 ' 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:09.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.218 --rc genhtml_branch_coverage=1 00:26:09.218 --rc genhtml_function_coverage=1 00:26:09.218 --rc genhtml_legend=1 00:26:09.218 --rc geninfo_all_blocks=1 00:26:09.218 --rc geninfo_unexecuted_blocks=1 00:26:09.218 00:26:09.218 ' 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:09.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.218 --rc genhtml_branch_coverage=1 00:26:09.218 --rc genhtml_function_coverage=1 00:26:09.218 --rc genhtml_legend=1 00:26:09.218 --rc geninfo_all_blocks=1 00:26:09.218 --rc geninfo_unexecuted_blocks=1 00:26:09.218 00:26:09.218 ' 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:09.218 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:09.219 12:56:39 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:09.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:26:09.219 12:56:39 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:26:09.219 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:26:09.220 Error setting digest 00:26:09.220 40328EE2F17F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:26:09.220 40328EE2F17F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:09.220 12:56:39 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:26:09.220 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:17.382 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:17.382 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:17.383 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:17.383 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:17.383 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:17.383 12:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:17.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:17.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:26:17.383 00:26:17.383 --- 10.0.0.2 ping statistics --- 00:26:17.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.383 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:17.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:17.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:26:17.383 00:26:17.383 --- 10.0.0.1 ping statistics --- 00:26:17.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.383 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:17.383 12:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3460705 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3460705 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3460705 ']' 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:17.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:17.383 12:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:17.383 [2024-11-28 12:56:46.914363] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:26:17.383 [2024-11-28 12:56:46.914439] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:17.383 [2024-11-28 12:56:47.059134] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:17.383 [2024-11-28 12:56:47.117939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.383 [2024-11-28 12:56:47.143582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:17.383 [2024-11-28 12:56:47.143626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:17.383 [2024-11-28 12:56:47.143634] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:17.383 [2024-11-28 12:56:47.143641] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:17.383 [2024-11-28 12:56:47.143647] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:17.383 [2024-11-28 12:56:47.144350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.645 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:17.645 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:26:17.645 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:17.645 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:17.645 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:17.645 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:17.645 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:26:17.645 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:17.645 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:26:17.645 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.aH5 00:26:17.645 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:17.645 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.aH5 00:26:17.645 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.aH5 00:26:17.645 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.aH5 00:26:17.645 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:17.906 [2024-11-28 12:56:47.926516] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:17.906 [2024-11-28 12:56:47.942487] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:17.906 [2024-11-28 12:56:47.942767] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.906 malloc0 00:26:17.906 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:17.906 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3461055 00:26:17.906 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3461055 /var/tmp/bdevperf.sock 00:26:17.906 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:17.906 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3461055 ']' 00:26:17.906 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:17.906 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:17.906 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:17.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:17.906 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:17.906 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:18.168 [2024-11-28 12:56:48.087945] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:26:18.168 [2024-11-28 12:56:48.088022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3461055 ] 00:26:18.168 [2024-11-28 12:56:48.225049] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:18.168 [2024-11-28 12:56:48.285181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.430 [2024-11-28 12:56:48.312850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:19.004 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:19.004 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:26:19.004 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.aH5 00:26:19.004 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:19.266 [2024-11-28 12:56:49.271592] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:19.266 TLSTESTn1 00:26:19.266 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:19.528 Running I/O for 10 seconds... 
00:26:21.416 3494.00 IOPS, 13.65 MiB/s [2024-11-28T11:56:52.485Z] 4336.50 IOPS, 16.94 MiB/s [2024-11-28T11:56:53.870Z] 4741.33 IOPS, 18.52 MiB/s [2024-11-28T11:56:54.813Z] 5122.25 IOPS, 20.01 MiB/s [2024-11-28T11:56:55.754Z] 5010.60 IOPS, 19.57 MiB/s [2024-11-28T11:56:56.696Z] 5002.50 IOPS, 19.54 MiB/s [2024-11-28T11:56:57.639Z] 5185.43 IOPS, 20.26 MiB/s [2024-11-28T11:56:58.668Z] 5216.88 IOPS, 20.38 MiB/s [2024-11-28T11:56:59.659Z] 5300.67 IOPS, 20.71 MiB/s [2024-11-28T11:56:59.659Z] 5382.80 IOPS, 21.03 MiB/s 00:26:29.532 Latency(us) 00:26:29.532 [2024-11-28T11:56:59.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.532 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:29.532 Verification LBA range: start 0x0 length 0x2000 00:26:29.532 TLSTESTn1 : 10.02 5387.49 21.04 0.00 0.00 23720.46 6185.74 40289.42 00:26:29.532 [2024-11-28T11:56:59.659Z] =================================================================================================================== 00:26:29.532 [2024-11-28T11:56:59.659Z] Total : 5387.49 21.04 0.00 0.00 23720.46 6185.74 40289.42 00:26:29.532 { 00:26:29.532 "results": [ 00:26:29.532 { 00:26:29.532 "job": "TLSTESTn1", 00:26:29.532 "core_mask": "0x4", 00:26:29.533 "workload": "verify", 00:26:29.533 "status": "finished", 00:26:29.533 "verify_range": { 00:26:29.533 "start": 0, 00:26:29.533 "length": 8192 00:26:29.533 }, 00:26:29.533 "queue_depth": 128, 00:26:29.533 "io_size": 4096, 00:26:29.533 "runtime": 10.015051, 00:26:29.533 "iops": 5387.49128686414, 00:26:29.533 "mibps": 21.04488783931305, 00:26:29.533 "io_failed": 0, 00:26:29.533 "io_timeout": 0, 00:26:29.533 "avg_latency_us": 23720.46271309359, 00:26:29.533 "min_latency_us": 6185.740060140328, 00:26:29.533 "max_latency_us": 40289.421984630804 00:26:29.533 } 00:26:29.533 ], 00:26:29.533 "core_count": 1 00:26:29.533 } 00:26:29.533 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:26:29.533 
12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:26:29.533 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:26:29.533 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:26:29.533 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:26:29.533 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:29.533 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:26:29.533 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:26:29.533 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:26:29.533 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:29.533 nvmf_trace.0 00:26:29.533 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:26:29.533 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3461055 00:26:29.533 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3461055 ']' 00:26:29.533 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3461055 00:26:29.533 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:26:29.533 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:29.533 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3461055 00:26:29.799 12:56:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:29.799 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:29.799 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3461055' 00:26:29.799 killing process with pid 3461055 00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3461055 00:26:29.800 Received shutdown signal, test time was about 10.000000 seconds 00:26:29.800 00:26:29.800 Latency(us) 00:26:29.800 [2024-11-28T11:56:59.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.800 [2024-11-28T11:56:59.927Z] =================================================================================================================== 00:26:29.800 [2024-11-28T11:56:59.927Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3461055 00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:29.800 rmmod nvme_tcp 00:26:29.800 rmmod nvme_fabrics 00:26:29.800 rmmod nvme_keyring 00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3460705 ']' 00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3460705 00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3460705 ']' 00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3460705 00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3460705 00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3460705' 00:26:29.800 killing process with pid 3460705 00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3460705 00:26:29.800 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3460705 00:26:30.062 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:30.062 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:30.062 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:30.062 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:26:30.062 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:26:30.062 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:30.062 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:26:30.062 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:30.062 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:30.062 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.062 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:30.062 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.980 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:32.241 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.aH5 00:26:32.241 00:26:32.241 real 0m23.258s 00:26:32.241 user 0m24.847s 00:26:32.241 sys 0m9.589s 00:26:32.241 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:32.241 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:32.241 ************************************ 00:26:32.241 END TEST nvmf_fips 00:26:32.241 ************************************ 00:26:32.241 12:57:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:26:32.241 12:57:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:32.241 12:57:02 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:26:32.241 12:57:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:32.241 ************************************ 00:26:32.241 START TEST nvmf_control_msg_list 00:26:32.241 ************************************ 00:26:32.241 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:26:32.241 * Looking for test storage... 00:26:32.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:32.241 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:32.241 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:26:32.242 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:26:32.505 12:57:02 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:32.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.505 --rc genhtml_branch_coverage=1 00:26:32.505 --rc genhtml_function_coverage=1 00:26:32.505 --rc genhtml_legend=1 00:26:32.505 --rc geninfo_all_blocks=1 00:26:32.505 --rc geninfo_unexecuted_blocks=1 00:26:32.505 00:26:32.505 ' 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:32.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.505 --rc genhtml_branch_coverage=1 00:26:32.505 --rc genhtml_function_coverage=1 00:26:32.505 --rc genhtml_legend=1 00:26:32.505 --rc geninfo_all_blocks=1 00:26:32.505 --rc geninfo_unexecuted_blocks=1 00:26:32.505 00:26:32.505 ' 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:32.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.505 --rc genhtml_branch_coverage=1 00:26:32.505 --rc genhtml_function_coverage=1 00:26:32.505 --rc genhtml_legend=1 00:26:32.505 --rc geninfo_all_blocks=1 00:26:32.505 --rc geninfo_unexecuted_blocks=1 00:26:32.505 00:26:32.505 ' 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:26:32.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.505 --rc genhtml_branch_coverage=1 00:26:32.505 --rc genhtml_function_coverage=1 00:26:32.505 --rc genhtml_legend=1 00:26:32.505 --rc geninfo_all_blocks=1 00:26:32.505 --rc geninfo_unexecuted_blocks=1 00:26:32.505 00:26:32.505 ' 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.505 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.506 12:57:02 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:32.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:32.506 12:57:02 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:26:32.506 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:26:40.651 12:57:09 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:40.651 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:40.651 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:40.651 12:57:09 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:40.651 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:40.651 12:57:09 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:40.651 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.651 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.652 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.652 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:40.652 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.652 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.652 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:40.652 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:40.652 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.652 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.652 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:40.652 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:40.652 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:40.652 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:40.652 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.652 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.652 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:40.652 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.652 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.652 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.652 12:57:09 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:40.652 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:40.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:26:40.652 00:26:40.652 --- 10.0.0.2 ping statistics --- 00:26:40.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.652 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:26:40.652 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:40.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:26:40.652 00:26:40.652 --- 10.0.0.1 ping statistics --- 00:26:40.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.652 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:26:40.652 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.652 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:26:40.652 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:40.652 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.652 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:40.652 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:40.652 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:26:40.652 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:40.652 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:40.652 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:26:40.652 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:40.652 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:40.652 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:40.652 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3467726 00:26:40.652 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3467726 00:26:40.652 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:40.652 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3467726 ']' 00:26:40.652 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.652 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:40.652 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:40.652 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:40.652 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:40.652 [2024-11-28 12:57:10.126606] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:26:40.652 [2024-11-28 12:57:10.126688] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.652 [2024-11-28 12:57:10.273301] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:40.652 [2024-11-28 12:57:10.334229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.652 [2024-11-28 12:57:10.360695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:40.652 [2024-11-28 12:57:10.360738] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.652 [2024-11-28 12:57:10.360746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:40.652 [2024-11-28 12:57:10.360753] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:40.652 [2024-11-28 12:57:10.360760] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
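The trace above (nvmf/common.sh@267–291) moves the target NIC into its own network namespace so one host can act as both NVMe/TCP target and initiator over a real TCP path. A dry-run sketch of that sequence, with the interface names, namespace, addresses, and port taken from the log — `run` only echoes each command so the sketch can be inspected without root; swap it for direct execution on a machine with the matching NICs:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns isolation done by nvmf/common.sh above.
# cvl_0_0/cvl_0_1 and the 10.0.0.x/24 addresses come from this log;
# substitute your own interfaces before executing for real.
set -euo pipefail

TARGET_IF=cvl_0_0        # moved into the namespace, used by the SPDK target
INITIATOR_IF=cvl_0_1     # stays in the root namespace for the initiator
NS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }   # print only; change body to "$@" to actually apply

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Tagged firewall rule, so teardown can strip it with grep -v SPDK_NVMF
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: nvmf-tcp listener'
# Verify both directions, as the log does with its two pings
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The target app is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...`), which is what the log's `NVMF_TARGET_NS_CMD` prefix does.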
00:26:40.652 [2024-11-28 12:57:10.361502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.934 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:40.935 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:26:40.935 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:40.935 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:40.935 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:40.935 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:40.935 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:26:40.935 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:40.935 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:26:40.935 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.935 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:40.935 [2024-11-28 12:57:10.982872] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.935 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.935 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:26:40.935 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.935 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:40.935 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.935 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:26:40.935 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.935 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:40.935 Malloc0 00:26:40.935 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.935 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:26:40.935 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.935 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:40.935 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.935 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:40.935 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.935 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:40.935 [2024-11-28 12:57:11.037113] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.935 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.935 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3468212 00:26:40.935 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:40.935 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3468214 00:26:40.935 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:40.935 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3468215 00:26:40.935 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3468212 00:26:40.935 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:41.197 [2024-11-28 12:57:11.247884] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
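The `rpc_cmd` calls traced above (control_msg_list.sh@19–23) configure the target through SPDK's JSON-RPC client, then race three single-queue perf initiators on separate cores against a transport deliberately limited to one control message. A dry-run sketch of the same sequence — the `scripts/rpc.py` path is assumed relative to an SPDK checkout, and `run` echoes rather than executes so the flow is visible without a running target:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC + perf sequence in control_msg_list.sh above.
# NQN, sizes, cores, and the listener address are taken from this log.
set -euo pipefail

RPC_PY="scripts/rpc.py"            # assumed path inside an SPDK checkout
NQN="nqn.2024-07.io.spdk:cnode0"
run() { echo "+ $*"; }             # print only; swap body for "$@" to apply

# Transport with a deliberately tiny control-message pool (the point of the test)
run "$RPC_PY" nvmf_create_transport -t tcp -o \
    --in-capsule-data-size 768 --control-msg-num 1
run "$RPC_PY" nvmf_create_subsystem "$NQN" -a          # -a: allow any host
run "$RPC_PY" bdev_malloc_create -b Malloc0 32 512     # 32 MB, 512 B blocks
run "$RPC_PY" nvmf_subsystem_add_ns "$NQN" Malloc0
run "$RPC_PY" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Three initiators on distinct cores contend for the single control message
for core in 0x2 0x4 0x8; do
  run spdk_nvme_perf -c "$core" -q 1 -o 4096 -w randread -t 1 \
      -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420" &
done
wait
```

Each perf instance attaching and completing with sane latency (as in the three result tables that follow) is the pass condition: the control-message list is recycled correctly under contention rather than deadlocking.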
00:26:41.197 [2024-11-28 12:57:11.248138] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:41.197 [2024-11-28 12:57:11.248507] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:42.582 Initializing NVMe Controllers 00:26:42.582 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:42.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:26:42.582 Initialization complete. Launching workers. 00:26:42.582 ======================================================== 00:26:42.582 Latency(us) 00:26:42.582 Device Information : IOPS MiB/s Average min max 00:26:42.582 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40992.40 40871.13 41038.39 00:26:42.582 ======================================================== 00:26:42.582 Total : 25.00 0.10 40992.40 40871.13 41038.39 00:26:42.582 00:26:42.582 Initializing NVMe Controllers 00:26:42.582 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:42.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:26:42.582 Initialization complete. Launching workers. 
00:26:42.582 ======================================================== 00:26:42.582 Latency(us) 00:26:42.582 Device Information : IOPS MiB/s Average min max 00:26:42.582 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40992.47 40822.48 41039.25 00:26:42.582 ======================================================== 00:26:42.582 Total : 25.00 0.10 40992.47 40822.48 41039.25 00:26:42.582 00:26:42.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3468214 00:26:42.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3468215 00:26:42.582 Initializing NVMe Controllers 00:26:42.582 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:42.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:26:42.582 Initialization complete. Launching workers. 00:26:42.582 ======================================================== 00:26:42.582 Latency(us) 00:26:42.582 Device Information : IOPS MiB/s Average min max 00:26:42.582 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40994.39 40832.09 41064.08 00:26:42.582 ======================================================== 00:26:42.582 Total : 25.00 0.10 40994.39 40832.09 41064.08 00:26:42.582 00:26:42.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:42.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:26:42.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:42.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:26:42.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:42.582 12:57:12 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:26:42.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:42.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:42.582 rmmod nvme_tcp 00:26:42.582 rmmod nvme_fabrics 00:26:42.582 rmmod nvme_keyring 00:26:42.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:42.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:26:42.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:26:42.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 3467726 ']' 00:26:42.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3467726 00:26:42.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3467726 ']' 00:26:42.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3467726 00:26:42.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:26:42.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:42.583 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3467726 00:26:42.583 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:42.583 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:42.583 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 3467726' 00:26:42.583 killing process with pid 3467726 00:26:42.583 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3467726 00:26:42.583 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3467726 00:26:42.583 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:42.583 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:42.583 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:42.583 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:26:42.583 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:26:42.583 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:42.583 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:26:42.583 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:42.583 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:42.583 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.583 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.842 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.753 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:44.753 00:26:44.753 real 0m12.586s 00:26:44.753 user 0m8.016s 
00:26:44.753 sys 0m6.536s 00:26:44.753 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:44.753 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:44.753 ************************************ 00:26:44.753 END TEST nvmf_control_msg_list 00:26:44.753 ************************************ 00:26:44.753 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:26:44.753 12:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:44.753 12:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:44.753 12:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:44.753 ************************************ 00:26:44.753 START TEST nvmf_wait_for_buf 00:26:44.753 ************************************ 00:26:44.753 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:26:45.014 * Looking for test storage... 
00:26:45.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:45.014 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:45.014 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:26:45.014 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:26:45.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.014 --rc genhtml_branch_coverage=1 00:26:45.014 --rc genhtml_function_coverage=1 00:26:45.014 --rc genhtml_legend=1 00:26:45.014 --rc geninfo_all_blocks=1 00:26:45.014 --rc geninfo_unexecuted_blocks=1 00:26:45.014 00:26:45.014 ' 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:45.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.014 --rc genhtml_branch_coverage=1 00:26:45.014 --rc genhtml_function_coverage=1 00:26:45.014 --rc genhtml_legend=1 00:26:45.014 --rc geninfo_all_blocks=1 00:26:45.014 --rc geninfo_unexecuted_blocks=1 00:26:45.014 00:26:45.014 ' 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:45.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.014 --rc genhtml_branch_coverage=1 00:26:45.014 --rc genhtml_function_coverage=1 00:26:45.014 --rc genhtml_legend=1 00:26:45.014 --rc geninfo_all_blocks=1 00:26:45.014 --rc geninfo_unexecuted_blocks=1 00:26:45.014 00:26:45.014 ' 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:45.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.014 --rc genhtml_branch_coverage=1 00:26:45.014 --rc genhtml_function_coverage=1 00:26:45.014 --rc genhtml_legend=1 00:26:45.014 --rc geninfo_all_blocks=1 00:26:45.014 --rc geninfo_unexecuted_blocks=1 00:26:45.014 00:26:45.014 ' 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:45.014 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:45.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:45.015 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:53.156 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:53.156 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:53.156 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:53.156 12:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:53.156 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:53.156 12:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:53.156 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:53.157 12:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:53.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:53.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.699 ms 00:26:53.157 00:26:53.157 --- 10.0.0.2 ping statistics --- 00:26:53.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.157 rtt min/avg/max/mdev = 0.699/0.699/0.699/0.000 ms 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:53.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:53.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:26:53.157 00:26:53.157 --- 10.0.0.1 ping statistics --- 00:26:53.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.157 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3472672 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 3472672 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3472672 ']' 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:53.157 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:53.157 [2024-11-28 12:57:22.761687] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:26:53.157 [2024-11-28 12:57:22.761750] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:53.157 [2024-11-28 12:57:22.905693] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:53.157 [2024-11-28 12:57:22.963298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.157 [2024-11-28 12:57:22.989455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:53.157 [2024-11-28 12:57:22.989499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:53.157 [2024-11-28 12:57:22.989508] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:53.157 [2024-11-28 12:57:22.989515] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:53.157 [2024-11-28 12:57:22.989521] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:53.157 [2024-11-28 12:57:22.990243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:53.729 Malloc0 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:26:53.729 12:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:53.729 [2024-11-28 12:57:23.728455] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:53.729 [2024-11-28 12:57:23.764734] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.729 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:53.990 [2024-11-28 12:57:23.973273] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:55.376 Initializing NVMe Controllers 00:26:55.376 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:55.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:26:55.376 Initialization complete. Launching workers. 
00:26:55.376 ======================================================== 00:26:55.376 Latency(us) 00:26:55.376 Device Information : IOPS MiB/s Average min max 00:26:55.376 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 123.99 15.50 33675.81 8031.73 71999.70 00:26:55.376 ======================================================== 00:26:55.376 Total : 123.99 15.50 33675.81 8031.73 71999.70 00:26:55.376 00:26:55.376 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:26:55.376 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:26:55.376 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.376 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:55.376 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.376 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:26:55.376 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:26:55.376 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:55.376 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:26:55.376 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:55.376 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:26:55.376 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:55.376 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:26:55.376 12:57:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:55.376 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:55.376 rmmod nvme_tcp 00:26:55.376 rmmod nvme_fabrics 00:26:55.376 rmmod nvme_keyring 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3472672 ']' 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3472672 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3472672 ']' 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3472672 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3472672 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3472672' 00:26:55.637 killing process with pid 3472672 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@973 -- # kill 3472672 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3472672 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.637 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.192 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:58.192 00:26:58.192 real 0m12.955s 00:26:58.192 user 0m5.172s 00:26:58.192 sys 0m6.259s 00:26:58.192 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:58.192 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:26:58.192 ************************************ 00:26:58.192 END TEST nvmf_wait_for_buf 00:26:58.192 ************************************ 00:26:58.192 12:57:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:26:58.192 12:57:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:26:58.192 12:57:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:58.192 12:57:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:58.192 12:57:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:58.192 ************************************ 00:26:58.192 START TEST nvmf_fuzz 00:26:58.192 ************************************ 00:26:58.192 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:26:58.192 * Looking for test storage... 
00:26:58.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:58.192 
12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:58.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.192 --rc genhtml_branch_coverage=1 00:26:58.192 --rc genhtml_function_coverage=1 00:26:58.192 --rc genhtml_legend=1 00:26:58.192 --rc geninfo_all_blocks=1 00:26:58.192 --rc 
geninfo_unexecuted_blocks=1 00:26:58.192 00:26:58.192 ' 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:58.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.192 --rc genhtml_branch_coverage=1 00:26:58.192 --rc genhtml_function_coverage=1 00:26:58.192 --rc genhtml_legend=1 00:26:58.192 --rc geninfo_all_blocks=1 00:26:58.192 --rc geninfo_unexecuted_blocks=1 00:26:58.192 00:26:58.192 ' 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:58.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.192 --rc genhtml_branch_coverage=1 00:26:58.192 --rc genhtml_function_coverage=1 00:26:58.192 --rc genhtml_legend=1 00:26:58.192 --rc geninfo_all_blocks=1 00:26:58.192 --rc geninfo_unexecuted_blocks=1 00:26:58.192 00:26:58.192 ' 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:58.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.192 --rc genhtml_branch_coverage=1 00:26:58.192 --rc genhtml_function_coverage=1 00:26:58.192 --rc genhtml_legend=1 00:26:58.192 --rc geninfo_all_blocks=1 00:26:58.192 --rc geninfo_unexecuted_blocks=1 00:26:58.192 00:26:58.192 ' 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
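The `cmp_versions` trace above (the `lt 1.15 2` check on the lcov version) walks two versions field by field after splitting on `.`, `-` and `:`, comparing each field numerically. A minimal standalone sketch of that logic — a simplified re-implementation for illustration, not the exact `scripts/common.sh` code:

```shell
# lt VER1 VER2 — true (exit 0) when VER1 sorts strictly before VER2.
# Fields are split on '.', '-' or ':' and compared numerically; missing
# trailing fields are treated as 0, as in the trace above.
lt() {
  local IFS=.-:
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local i
  for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
    if ((${ver1[i]:-0} < ${ver2[i]:-0})); then
      return 0
    elif ((${ver1[i]:-0} > ${ver2[i]:-0})); then
      return 1
    fi
  done
  return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov 1.15 predates 2"
```

In the run above this comparison succeeds (lcov 1.15 < 2), which is why the harness falls back to the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` option spelling rather than the newer `branch_coverage`/`function_coverage` names.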
00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:58.192 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.193 12:57:28 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:58.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:26:58.193 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:06.337 12:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:4b:00.0 (0x8086 - 0x159b)' 00:27:06.337 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:06.337 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:06.337 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:06.337 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:06.337 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:06.338 12:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:06.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:06.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:27:06.338 00:27:06.338 --- 10.0.0.2 ping statistics --- 00:27:06.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.338 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:06.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:06.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:27:06.338 00:27:06.338 --- 10.0.0.1 ping statistics --- 00:27:06.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.338 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3477407 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3477407 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' 
-z 3477407 ']' 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:06.338 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:06.599 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:06.599 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:27:06.599 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:06.600 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.600 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:06.600 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.600 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:27:06.600 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.600 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:06.600 Malloc0 00:27:06.600 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.600 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:06.600 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.600 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:27:06.600 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.600 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:06.600 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.600 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:27:06.600 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.600 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:06.600 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.600 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:27:06.600 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.600 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
00:27:06.600 12:57:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a
00:27:38.727 Fuzzing completed. Shutting down the fuzz application
00:27:38.727
00:27:38.728 Dumping successful admin opcodes:
00:27:38.728 9, 10,
00:27:38.728 Dumping successful io opcodes:
00:27:38.728 0, 9,
00:27:38.728 NS: 0x2000008eff00 I/O qp, Total commands completed: 1166430, total successful commands: 6864, random_seed: 3169938880
00:27:38.728 NS: 0x2000008eff00 admin qp, Total commands completed: 153056, total successful commands: 34, random_seed: 678773184
00:27:38.728 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a
00:27:38.728 Fuzzing completed. Shutting down the fuzz application
00:27:38.728
00:27:38.728 Dumping successful admin opcodes:
00:27:38.728
00:27:38.728 Dumping successful io opcodes:
00:27:38.728
00:27:38.728 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 4215514459
00:27:38.728 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 4215585875
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:38.728 rmmod nvme_tcp
00:27:38.728 rmmod nvme_fabrics
00:27:38.728 rmmod nvme_keyring
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 3477407 ']'
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 3477407
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3477407 ']'
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 3477407
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3477407
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3477407'
killing process with pid 3477407
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 3477407
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 3477407
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:38.728 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:41.277 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:41.277 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt
00:27:41.277
00:27:41.277 real 0m43.018s
00:27:41.277 user 0m56.584s
00:27:41.277 sys 0m15.737s
00:27:41.277 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:41.277 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:27:41.277 ************************************
00:27:41.277 END TEST nvmf_fuzz
00:27:41.277 ************************************
00:27:41.277 12:58:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp
00:27:41.277 12:58:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:41.277 12:58:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:41.277 12:58:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:27:41.277 ************************************
00:27:41.277 START TEST nvmf_multiconnection
00:27:41.277 ************************************
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp
00:27:41.277 * Looking for test storage...
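The nvme_fuzz summary lines above ("Total commands completed: …, total successful commands: …") are plain text and easy to post-process when triaging a run. A minimal sketch: the sample line is copied verbatim from the trace above, and the sed patterns assume nothing beyond the label text itself.

```shell
# Pull the command counters out of one nvme_fuzz per-queue summary line.
line='NS: 0x2000008eff00 I/O qp, Total commands completed: 1166430, total successful commands: 6864, random_seed: 3169938880'

# Strip everything up to each label, then keep the digits that follow it.
total=$(printf '%s\n' "$line" | sed 's/.*Total commands completed: \([0-9]*\).*/\1/')
ok=$(printf '%s\n' "$line" | sed 's/.*total successful commands: \([0-9]*\).*/\1/')

echo "total=$total ok=$ok"
```

Running it against the I/O-queue line from the first fuzzer pass prints `total=1166430 ok=6864`.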
00:27:41.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-:
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-:
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<'
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
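The `cmp_versions` trace above splits each version string on `IFS=.-:` and compares component by component until one side wins. The same "less than" check can be sketched with GNU `sort -V` instead of the hand-rolled loop; `version_lt` is a hypothetical helper name, not the script's own `lt`.

```shell
# Dotted-version "less than" via GNU sort -V: $1 < $2 iff the two strings
# differ and $1 sorts first in version order. Equivalent in outcome to the
# component-by-component comparison the trace performs.
version_lt() {
  [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]
}

version_lt 1.15 2 && echo "1.15 < 2"   # the comparison exercised in the trace
```

This prints `1.15 < 2`, matching the trace's `lt 1.15 2` returning success.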
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:27:41.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:41.277 --rc genhtml_branch_coverage=1
00:27:41.277 --rc genhtml_function_coverage=1
00:27:41.277 --rc genhtml_legend=1
00:27:41.277 --rc geninfo_all_blocks=1
00:27:41.277 --rc geninfo_unexecuted_blocks=1
00:27:41.277
00:27:41.277 '
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:27:41.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:41.277 --rc genhtml_branch_coverage=1
00:27:41.277 --rc genhtml_function_coverage=1
00:27:41.277 --rc genhtml_legend=1
00:27:41.277 --rc geninfo_all_blocks=1
00:27:41.277 --rc geninfo_unexecuted_blocks=1
00:27:41.277
00:27:41.277 '
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:27:41.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:41.277 --rc genhtml_branch_coverage=1
00:27:41.277 --rc genhtml_function_coverage=1
00:27:41.277 --rc genhtml_legend=1
00:27:41.277 --rc geninfo_all_blocks=1
00:27:41.277 --rc geninfo_unexecuted_blocks=1
00:27:41.277
00:27:41.277 '
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:27:41.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:41.277 --rc genhtml_branch_coverage=1
00:27:41.277 --rc genhtml_function_coverage=1
00:27:41.277 --rc genhtml_legend=1
00:27:41.277 --rc geninfo_all_blocks=1
00:27:41.277 --rc geninfo_unexecuted_blocks=1
00:27:41.277
00:27:41.277 '
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:41.277 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable
00:27:41.278 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:49.417 12:58:18 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:49.417 12:58:18 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:49.417 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:49.417 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:49.417 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:49.418 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == 
up ]] 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:49.418 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:49.418 12:58:18 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:49.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:49.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:27:49.418 00:27:49.418 --- 10.0.0.2 ping statistics --- 00:27:49.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.418 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:49.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:49.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:27:49.418 00:27:49.418 --- 10.0.0.1 ping statistics --- 00:27:49.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.418 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
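The network bring-up traced above (nvmf/common.sh@271–291) isolates the target-side port in its own network namespace and verifies connectivity in both directions with ping. A minimal sketch of that topology follows; the `cvl_0_0`/`cvl_0_1` interface names are specific to this rig, and the commands are only collected into an array here (running them for real requires root and those interfaces):

```shell
# Reconstructed from the trace above: target port moves into a namespace,
# initiator port stays on the host, 10.0.0.0/24 spans the two.
TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
cmds=(
  "ip netns add $NS"                                          # isolated namespace for the target
  "ip link set $TARGET_IF netns $NS"                          # move target-side port into it
  "ip addr add 10.0.0.1/24 dev $INITIATOR_IF"                 # initiator keeps the host side
  "ip netns exec $NS ip addr add 10.0.0.2/24 dev $TARGET_IF"  # target IP inside the namespace
  "ip link set $INITIATOR_IF up"
  "ip netns exec $NS ip link set $TARGET_IF up"
  "ip netns exec $NS ip link set lo up"
)
printf '%s\n' "${cmds[@]}"
```

The cross-namespace pings in the log (host to 10.0.0.2, namespace to 10.0.0.1) are what gate the `return 0` from `nvmf_tcp_init`.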
00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=3487947 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 3487947 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 3487947 ']' 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:49.418 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.418 [2024-11-28 12:58:18.861025] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:27:49.418 [2024-11-28 12:58:18.861093] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:49.418 [2024-11-28 12:58:19.005927] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:49.418 [2024-11-28 12:58:19.065680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:49.418 [2024-11-28 12:58:19.095156] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:49.418 [2024-11-28 12:58:19.095211] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:49.418 [2024-11-28 12:58:19.095219] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:49.418 [2024-11-28 12:58:19.095226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:49.418 [2024-11-28 12:58:19.095233] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:49.418 [2024-11-28 12:58:19.097106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.418 [2024-11-28 12:58:19.097266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:49.418 [2024-11-28 12:58:19.097310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.418 [2024-11-28 12:58:19.097312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:49.680 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:49.680 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:27:49.680 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:49.680 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:49.680 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.680 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:49.680 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:49.680 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.680 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.680 [2024-11-28 12:58:19.745884] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:49.680 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.680 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:27:49.680 12:58:19 
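The `rpc_cmd nvmf_create_transport -t tcp -o -u 8192` call above (which produces the "TCP Transport Init" notice) is a wrapper around SPDK's JSON-RPC client. A hedged sketch of the equivalent direct invocation; the `./scripts/rpc.py` path is an assumption for illustration, while `/var/tmp/spdk.sock` is the socket the log waits on:

```shell
# Assumed rpc.py location; rpc_cmd in the autotest harness issues the same RPC.
RPC=./scripts/rpc.py
SOCK=/var/tmp/spdk.sock
# -t tcp: transport type; -o and -u 8192 match the trace's NVMF_TRANSPORT_OPTS.
create_transport="$RPC -s $SOCK nvmf_create_transport -t tcp -o -u 8192"
echo "$create_transport"
```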
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:49.680 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:49.680 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.680 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.680 Malloc1 00:27:49.680 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.680 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:27:49.680 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.680 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.942 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.942 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:49.942 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.942 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.942 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.942 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:49.942 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.942 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.942 [2024-11-28 12:58:19.829857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:49.942 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.942 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:49.942 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:27:49.942 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.942 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.942 Malloc2 00:27:49.942 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.943 Malloc3 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.943 Malloc4 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.943 
12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.943 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.943 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.943 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:27:49.943 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.943 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.943 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.943 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:49.943 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:27:49.943 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.943 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.943 Malloc5 00:27:49.943 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.943 12:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:27:49.943 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.943 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.943 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.943 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:27:49.943 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.943 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.943 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.943 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:27:49.943 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.943 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.205 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.205 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:50.205 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:27:50.205 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:50.205 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.205 Malloc6 00:27:50.205 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.205 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.206 Malloc7 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.206 12:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.206 Malloc8 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.206 12:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.206 Malloc9 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.206 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.468 Malloc10 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.468 Malloc11 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:27:50.468 
12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:50.468 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
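The eleven near-identical RPC triplets traced above (multiconnection.sh@21–25, Malloc1 through Malloc11) follow one loop. This is a reconstruction of its shape from the log, not a copy of the script; the calls are collected into an array rather than executed so the sketch runs anywhere:

```shell
# Per-subsystem pattern from the trace: malloc bdev -> subsystem -> namespace -> listener.
NVMF_SUBSYS=11
rpc_calls=()
for i in $(seq 1 "$NVMF_SUBSYS"); do
  rpc_calls+=("bdev_malloc_create 64 512 -b Malloc$i")                           # 64 MiB bdev, 512 B blocks
  rpc_calls+=("nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i")  # -a: allow any host; -s: serial
  rpc_calls+=("nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i")      # expose bdev as a namespace
  rpc_calls+=("nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420")
done
printf '%s\n' "${rpc_calls[@]}"
```

The serials (`SPDK1`..`SPDK11`) are what the subsequent `waitforserial` checks match against after each `nvme connect`.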
00:27:52.384 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:27:52.384 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:27:52.384 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:52.384 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:52.384 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:27:54.299 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:54.299 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:54.299 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:27:54.299 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:54.299 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:54.299 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:27:54.299 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:54.299 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:27:55.681 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:27:55.681 12:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:27:55.681 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:55.681 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:55.682 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:27:57.598 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:57.598 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:57.598 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:27:57.598 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:57.598 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:57.598 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:27:57.598 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:57.598 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:27:59.545 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:27:59.545 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:27:59.545 12:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:59.545 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:59.545 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:28:01.584 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:28:01.584 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:28:01.584 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:28:01.584 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:28:01.584 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:28:01.584 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:28:01.584 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:01.584 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:28:02.986 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:28:02.986 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:28:02.986 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:28:02.986 
12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:28:02.986 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:28:04.897 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:28:04.897 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:28:04.897 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:28:04.897 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:28:04.897 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:28:04.897 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:28:04.897 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.897 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:28:06.806 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:28:06.806 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:28:06.806 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:28:06.806 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:28:06.806 12:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:28:08.717 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:28:08.717 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:28:08.717 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:28:08.717 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:28:08.717 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:28:08.717 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:28:08.717 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:08.717 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:28:10.632 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:28:10.632 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:28:10.632 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:28:10.632 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:28:10.632 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:28:12.540 12:58:42 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:28:12.540 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:28:12.540 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:28:12.540 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:28:12.540 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:28:12.540 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:28:12.540 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:12.540 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:28:13.919 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:28:13.919 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:28:13.919 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:28:13.919 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:28:13.919 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:28:16.457 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:28:16.457 12:58:46 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:28:16.457 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:28:16.457 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:28:16.457 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:28:16.457 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:28:16.457 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:16.457 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:28:17.840 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:28:17.840 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:28:17.840 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:28:17.840 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:28:17.840 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:28:19.749 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:28:19.749 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:28:19.749 12:58:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:28:19.749 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:28:19.749 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:28:19.749 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:28:19.749 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:19.749 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:28:21.659 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:28:21.659 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:28:21.659 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:28:21.659 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:28:21.659 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:28:23.568 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:28:23.568 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:28:23.568 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:28:23.568 12:58:53 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:28:23.568 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:28:23.568 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:28:23.568 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:23.568 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:28:25.478 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:28:25.478 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:28:25.478 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:28:25.478 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:28:25.478 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:28:27.392 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:28:27.392 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:28:27.392 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:28:27.392 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:28:27.392 12:58:57 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:28:27.392 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:28:27.392 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:27.392 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:28:29.302 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:28:29.302 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:28:29.302 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:28:29.302 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:28:29.302 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:28:31.206 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:28:31.206 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:28:31.206 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:28:31.206 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:28:31.206 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:28:31.206 
12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:28:31.206 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:28:31.206 [global] 00:28:31.206 thread=1 00:28:31.206 invalidate=1 00:28:31.206 rw=read 00:28:31.206 time_based=1 00:28:31.206 runtime=10 00:28:31.206 ioengine=libaio 00:28:31.206 direct=1 00:28:31.206 bs=262144 00:28:31.206 iodepth=64 00:28:31.206 norandommap=1 00:28:31.206 numjobs=1 00:28:31.206 00:28:31.206 [job0] 00:28:31.206 filename=/dev/nvme0n1 00:28:31.206 [job1] 00:28:31.206 filename=/dev/nvme10n1 00:28:31.206 [job2] 00:28:31.206 filename=/dev/nvme1n1 00:28:31.206 [job3] 00:28:31.206 filename=/dev/nvme2n1 00:28:31.206 [job4] 00:28:31.206 filename=/dev/nvme3n1 00:28:31.206 [job5] 00:28:31.206 filename=/dev/nvme4n1 00:28:31.206 [job6] 00:28:31.206 filename=/dev/nvme5n1 00:28:31.206 [job7] 00:28:31.206 filename=/dev/nvme6n1 00:28:31.465 [job8] 00:28:31.465 filename=/dev/nvme7n1 00:28:31.465 [job9] 00:28:31.465 filename=/dev/nvme8n1 00:28:31.465 [job10] 00:28:31.465 filename=/dev/nvme9n1 00:28:31.465 Could not set queue depth (nvme0n1) 00:28:31.465 Could not set queue depth (nvme10n1) 00:28:31.465 Could not set queue depth (nvme1n1) 00:28:31.465 Could not set queue depth (nvme2n1) 00:28:31.465 Could not set queue depth (nvme3n1) 00:28:31.465 Could not set queue depth (nvme4n1) 00:28:31.465 Could not set queue depth (nvme5n1) 00:28:31.465 Could not set queue depth (nvme6n1) 00:28:31.465 Could not set queue depth (nvme7n1) 00:28:31.465 Could not set queue depth (nvme8n1) 00:28:31.465 Could not set queue depth (nvme9n1) 00:28:31.723 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:31.723 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:28:31.723 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:31.723 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:31.723 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:31.723 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:31.723 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:31.723 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:31.723 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:31.723 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:31.723 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:31.723 fio-3.35 00:28:31.723 Starting 11 threads 00:28:43.951 00:28:43.951 job0: (groupid=0, jobs=1): err= 0: pid=3496441: Thu Nov 28 12:59:12 2024 00:28:43.951 read: IOPS=376, BW=94.2MiB/s (98.8MB/s)(955MiB/10140msec) 00:28:43.951 slat (usec): min=11, max=122399, avg=2474.81, stdev=8355.13 00:28:43.951 clat (msec): min=14, max=602, avg=167.08, stdev=101.83 00:28:43.951 lat (msec): min=15, max=602, avg=169.55, stdev=103.13 00:28:43.951 clat percentiles (msec): 00:28:43.951 | 1.00th=[ 56], 5.00th=[ 89], 10.00th=[ 96], 20.00th=[ 105], 00:28:43.951 | 30.00th=[ 110], 40.00th=[ 114], 50.00th=[ 120], 60.00th=[ 127], 00:28:43.951 | 70.00th=[ 150], 80.00th=[ 249], 90.00th=[ 326], 95.00th=[ 405], 00:28:43.951 | 99.00th=[ 481], 99.50th=[ 498], 99.90th=[ 592], 99.95th=[ 600], 00:28:43.951 | 99.99th=[ 600] 00:28:43.951 bw ( KiB/s): min=33792, max=159232, 
per=9.71%, avg=96179.20, stdev=47738.63, samples=20 00:28:43.951 iops : min= 132, max= 622, avg=375.70, stdev=186.48, samples=20 00:28:43.951 lat (msec) : 20=0.26%, 50=0.55%, 100=13.50%, 250=65.98%, 500=19.24% 00:28:43.951 lat (msec) : 750=0.47% 00:28:43.951 cpu : usr=0.19%, sys=1.38%, ctx=659, majf=0, minf=3534 00:28:43.951 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:28:43.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:43.951 issued rwts: total=3821,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.951 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:43.951 job1: (groupid=0, jobs=1): err= 0: pid=3496442: Thu Nov 28 12:59:12 2024 00:28:43.951 read: IOPS=1316, BW=329MiB/s (345MB/s)(3299MiB/10022msec) 00:28:43.951 slat (usec): min=9, max=43851, avg=754.41, stdev=2416.67 00:28:43.951 clat (msec): min=12, max=159, avg=47.78, stdev=23.78 00:28:43.951 lat (msec): min=14, max=169, avg=48.53, stdev=24.14 00:28:43.951 clat percentiles (msec): 00:28:43.951 | 1.00th=[ 30], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 37], 00:28:43.951 | 30.00th=[ 39], 40.00th=[ 40], 50.00th=[ 41], 60.00th=[ 42], 00:28:43.951 | 70.00th=[ 45], 80.00th=[ 47], 90.00th=[ 58], 95.00th=[ 118], 00:28:43.951 | 99.00th=[ 140], 99.50th=[ 146], 99.90th=[ 153], 99.95th=[ 157], 00:28:43.951 | 99.99th=[ 161] 00:28:43.951 bw ( KiB/s): min=132360, max=442368, per=33.93%, avg=336192.40, stdev=107147.74, samples=20 00:28:43.951 iops : min= 517, max= 1728, avg=1313.25, stdev=418.55, samples=20 00:28:43.951 lat (msec) : 20=0.08%, 50=88.45%, 100=3.53%, 250=7.93% 00:28:43.951 cpu : usr=0.56%, sys=4.34%, ctx=1524, majf=0, minf=4097 00:28:43.951 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:28:43.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:28:43.951 issued rwts: total=13195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.951 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:43.951 job2: (groupid=0, jobs=1): err= 0: pid=3496443: Thu Nov 28 12:59:12 2024 00:28:43.951 read: IOPS=618, BW=155MiB/s (162MB/s)(1562MiB/10107msec) 00:28:43.951 slat (usec): min=11, max=104553, avg=1528.44, stdev=6004.88 00:28:43.951 clat (msec): min=12, max=509, avg=101.89, stdev=99.26 00:28:43.951 lat (msec): min=14, max=509, avg=103.42, stdev=100.60 00:28:43.951 clat percentiles (msec): 00:28:43.951 | 1.00th=[ 29], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 37], 00:28:43.951 | 30.00th=[ 40], 40.00th=[ 42], 50.00th=[ 44], 60.00th=[ 79], 00:28:43.951 | 70.00th=[ 115], 80.00th=[ 153], 90.00th=[ 253], 95.00th=[ 334], 00:28:43.951 | 99.00th=[ 460], 99.50th=[ 472], 99.90th=[ 510], 99.95th=[ 510], 00:28:43.951 | 99.99th=[ 510] 00:28:43.951 bw ( KiB/s): min=33280, max=427008, per=15.98%, avg=158336.00, stdev=139418.35, samples=20 00:28:43.951 iops : min= 130, max= 1668, avg=618.50, stdev=544.60, samples=20 00:28:43.951 lat (msec) : 20=0.61%, 50=52.24%, 100=12.28%, 250=24.68%, 500=10.08% 00:28:43.951 lat (msec) : 750=0.11% 00:28:43.951 cpu : usr=0.22%, sys=2.20%, ctx=932, majf=0, minf=4097 00:28:43.951 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:43.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:43.951 issued rwts: total=6248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.951 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:43.951 job3: (groupid=0, jobs=1): err= 0: pid=3496444: Thu Nov 28 12:59:12 2024 00:28:43.951 read: IOPS=364, BW=91.0MiB/s (95.4MB/s)(922MiB/10126msec) 00:28:43.951 slat (usec): min=12, max=313912, avg=2501.15, stdev=9985.74 00:28:43.951 clat (msec): min=21, max=804, avg=173.17, stdev=131.45 00:28:43.951 lat 
(msec): min=21, max=889, avg=175.67, stdev=132.99 00:28:43.951 clat percentiles (msec): 00:28:43.951 | 1.00th=[ 44], 5.00th=[ 75], 10.00th=[ 89], 20.00th=[ 102], 00:28:43.951 | 30.00th=[ 107], 40.00th=[ 112], 50.00th=[ 116], 60.00th=[ 122], 00:28:43.951 | 70.00th=[ 132], 80.00th=[ 259], 90.00th=[ 388], 95.00th=[ 447], 00:28:43.951 | 99.00th=[ 735], 99.50th=[ 743], 99.90th=[ 793], 99.95th=[ 802], 00:28:43.951 | 99.99th=[ 802] 00:28:43.951 bw ( KiB/s): min=28160, max=160768, per=9.36%, avg=92723.20, stdev=51710.51, samples=20 00:28:43.951 iops : min= 110, max= 628, avg=362.20, stdev=201.99, samples=20 00:28:43.951 lat (msec) : 50=1.09%, 100=17.15%, 250=60.72%, 500=18.64%, 750=1.95% 00:28:43.951 lat (msec) : 1000=0.46% 00:28:43.951 cpu : usr=0.12%, sys=1.38%, ctx=658, majf=0, minf=4097 00:28:43.951 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:28:43.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:43.951 issued rwts: total=3686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.951 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:43.951 job4: (groupid=0, jobs=1): err= 0: pid=3496445: Thu Nov 28 12:59:12 2024 00:28:43.951 read: IOPS=141, BW=35.5MiB/s (37.2MB/s)(359MiB/10128msec) 00:28:43.951 slat (usec): min=12, max=588129, avg=5059.10, stdev=30735.93 00:28:43.951 clat (msec): min=18, max=1314, avg=445.51, stdev=239.02 00:28:43.951 lat (msec): min=18, max=1314, avg=450.57, stdev=242.22 00:28:43.951 clat percentiles (msec): 00:28:43.951 | 1.00th=[ 36], 5.00th=[ 129], 10.00th=[ 176], 20.00th=[ 262], 00:28:43.951 | 30.00th=[ 313], 40.00th=[ 342], 50.00th=[ 384], 60.00th=[ 439], 00:28:43.951 | 70.00th=[ 502], 80.00th=[ 701], 90.00th=[ 827], 95.00th=[ 911], 00:28:43.951 | 99.00th=[ 995], 99.50th=[ 1003], 99.90th=[ 1267], 99.95th=[ 1318], 00:28:43.951 | 99.99th=[ 1318] 00:28:43.951 bw ( KiB/s): min= 3072, max=80384, 
per=3.74%, avg=37025.68, stdev=17043.66, samples=19 00:28:43.951 iops : min= 12, max= 314, avg=144.63, stdev=66.58, samples=19 00:28:43.951 lat (msec) : 20=0.07%, 50=1.53%, 100=1.18%, 250=16.28%, 500=51.22% 00:28:43.951 lat (msec) : 750=13.29%, 1000=16.14%, 2000=0.28% 00:28:43.951 cpu : usr=0.05%, sys=0.55%, ctx=291, majf=0, minf=4097 00:28:43.951 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:28:43.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.951 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:43.951 issued rwts: total=1437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.951 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:43.951 job5: (groupid=0, jobs=1): err= 0: pid=3496446: Thu Nov 28 12:59:12 2024 00:28:43.951 read: IOPS=194, BW=48.6MiB/s (51.0MB/s)(492MiB/10114msec) 00:28:43.951 slat (usec): min=11, max=357478, avg=3896.50, stdev=19787.27 00:28:43.951 clat (msec): min=15, max=932, avg=324.65, stdev=246.32 00:28:43.951 lat (msec): min=15, max=1050, avg=328.55, stdev=250.16 00:28:43.951 clat percentiles (msec): 00:28:43.951 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 37], 20.00th=[ 99], 00:28:43.951 | 30.00th=[ 163], 40.00th=[ 205], 50.00th=[ 275], 60.00th=[ 338], 00:28:43.951 | 70.00th=[ 401], 80.00th=[ 542], 90.00th=[ 743], 95.00th=[ 810], 00:28:43.951 | 99.00th=[ 852], 99.50th=[ 869], 99.90th=[ 919], 99.95th=[ 936], 00:28:43.951 | 99.99th=[ 936] 00:28:43.951 bw ( KiB/s): min=10240, max=129024, per=4.92%, avg=48716.80, stdev=33852.08, samples=20 00:28:43.951 iops : min= 40, max= 504, avg=190.30, stdev=132.23, samples=20 00:28:43.951 lat (msec) : 20=0.25%, 50=15.51%, 100=4.52%, 250=25.88%, 500=32.74% 00:28:43.951 lat (msec) : 750=11.74%, 1000=9.35% 00:28:43.951 cpu : usr=0.05%, sys=0.76%, ctx=464, majf=0, minf=4097 00:28:43.951 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:28:43.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.951 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:43.951 issued rwts: total=1967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.951 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:43.951 job6: (groupid=0, jobs=1): err= 0: pid=3496447: Thu Nov 28 12:59:12 2024 00:28:43.952 read: IOPS=128, BW=32.2MiB/s (33.8MB/s)(325MiB/10103msec) 00:28:43.952 slat (usec): min=12, max=370482, avg=6504.51, stdev=30033.63 00:28:43.952 clat (msec): min=11, max=1044, avg=489.81, stdev=235.33 00:28:43.952 lat (msec): min=11, max=1159, avg=496.31, stdev=239.88 00:28:43.952 clat percentiles (msec): 00:28:43.952 | 1.00th=[ 67], 5.00th=[ 144], 10.00th=[ 163], 20.00th=[ 275], 00:28:43.952 | 30.00th=[ 347], 40.00th=[ 418], 50.00th=[ 472], 60.00th=[ 514], 00:28:43.952 | 70.00th=[ 584], 80.00th=[ 760], 90.00th=[ 852], 95.00th=[ 894], 00:28:43.952 | 99.00th=[ 927], 99.50th=[ 986], 99.90th=[ 1036], 99.95th=[ 1045], 00:28:43.952 | 99.99th=[ 1045] 00:28:43.952 bw ( KiB/s): min=11264, max=56832, per=3.20%, avg=31667.20, stdev=14711.50, samples=20 00:28:43.952 iops : min= 44, max= 222, avg=123.70, stdev=57.47, samples=20 00:28:43.952 lat (msec) : 20=0.38%, 50=0.08%, 100=0.85%, 250=14.83%, 500=40.97% 00:28:43.952 lat (msec) : 750=21.68%, 1000=20.98%, 2000=0.23% 00:28:43.952 cpu : usr=0.09%, sys=0.51%, ctx=233, majf=0, minf=4097 00:28:43.952 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.2% 00:28:43.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.952 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:43.952 issued rwts: total=1301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.952 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:43.952 job7: (groupid=0, jobs=1): err= 0: pid=3496450: Thu Nov 28 12:59:12 2024 00:28:43.952 read: IOPS=141, BW=35.3MiB/s (37.0MB/s)(357MiB/10117msec) 00:28:43.952 slat (usec): min=11, 
max=490738, avg=4951.66, stdev=27886.11 00:28:43.952 clat (msec): min=15, max=1092, avg=448.04, stdev=245.44 00:28:43.952 lat (msec): min=16, max=1275, avg=452.99, stdev=249.60 00:28:43.952 clat percentiles (msec): 00:28:43.952 | 1.00th=[ 89], 5.00th=[ 110], 10.00th=[ 128], 20.00th=[ 171], 00:28:43.952 | 30.00th=[ 275], 40.00th=[ 359], 50.00th=[ 426], 60.00th=[ 472], 00:28:43.952 | 70.00th=[ 600], 80.00th=[ 718], 90.00th=[ 785], 95.00th=[ 869], 00:28:43.952 | 99.00th=[ 919], 99.50th=[ 919], 99.90th=[ 919], 99.95th=[ 1099], 00:28:43.952 | 99.99th=[ 1099] 00:28:43.952 bw ( KiB/s): min=10752, max=113664, per=3.52%, avg=34892.80, stdev=23446.48, samples=20 00:28:43.952 iops : min= 42, max= 444, avg=136.30, stdev=91.59, samples=20 00:28:43.952 lat (msec) : 20=0.21%, 50=0.70%, 100=3.15%, 250=20.74%, 500=36.79% 00:28:43.952 lat (msec) : 750=22.35%, 1000=15.98%, 2000=0.07% 00:28:43.952 cpu : usr=0.04%, sys=0.55%, ctx=277, majf=0, minf=4097 00:28:43.952 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:28:43.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.952 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:43.952 issued rwts: total=1427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.952 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:43.952 job8: (groupid=0, jobs=1): err= 0: pid=3496451: Thu Nov 28 12:59:12 2024 00:28:43.952 read: IOPS=131, BW=32.8MiB/s (34.4MB/s)(331MiB/10098msec) 00:28:43.952 slat (usec): min=13, max=428607, avg=7557.75, stdev=28907.02 00:28:43.952 clat (msec): min=53, max=1038, avg=479.91, stdev=211.27 00:28:43.952 lat (msec): min=53, max=1172, avg=487.47, stdev=214.97 00:28:43.952 clat percentiles (msec): 00:28:43.952 | 1.00th=[ 110], 5.00th=[ 201], 10.00th=[ 224], 20.00th=[ 275], 00:28:43.952 | 30.00th=[ 363], 40.00th=[ 405], 50.00th=[ 435], 60.00th=[ 477], 00:28:43.952 | 70.00th=[ 617], 80.00th=[ 718], 90.00th=[ 802], 95.00th=[ 852], 
00:28:43.952 | 99.00th=[ 894], 99.50th=[ 911], 99.90th=[ 1036], 99.95th=[ 1036], 00:28:43.952 | 99.99th=[ 1036] 00:28:43.952 bw ( KiB/s): min= 9728, max=64000, per=3.26%, avg=32281.60, stdev=15762.07, samples=20 00:28:43.952 iops : min= 38, max= 250, avg=126.10, stdev=61.57, samples=20 00:28:43.952 lat (msec) : 100=0.76%, 250=14.88%, 500=49.17%, 750=20.09%, 1000=14.80% 00:28:43.952 lat (msec) : 2000=0.30% 00:28:43.952 cpu : usr=0.02%, sys=0.59%, ctx=208, majf=0, minf=4097 00:28:43.952 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.2% 00:28:43.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.952 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:43.952 issued rwts: total=1324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.952 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:43.952 job9: (groupid=0, jobs=1): err= 0: pid=3496453: Thu Nov 28 12:59:12 2024 00:28:43.952 read: IOPS=250, BW=62.6MiB/s (65.6MB/s)(633MiB/10113msec) 00:28:43.952 slat (usec): min=11, max=648489, avg=3833.42, stdev=20028.59 00:28:43.952 clat (msec): min=10, max=1061, avg=251.49, stdev=212.74 00:28:43.952 lat (msec): min=11, max=1422, avg=255.32, stdev=215.89 00:28:43.952 clat percentiles (msec): 00:28:43.952 | 1.00th=[ 22], 5.00th=[ 63], 10.00th=[ 99], 20.00th=[ 113], 00:28:43.952 | 30.00th=[ 121], 40.00th=[ 129], 50.00th=[ 136], 60.00th=[ 155], 00:28:43.952 | 70.00th=[ 288], 80.00th=[ 435], 90.00th=[ 527], 95.00th=[ 743], 00:28:43.952 | 99.00th=[ 978], 99.50th=[ 986], 99.90th=[ 1062], 99.95th=[ 1062], 00:28:43.952 | 99.99th=[ 1062] 00:28:43.952 bw ( KiB/s): min= 9728, max=143360, per=6.37%, avg=63155.20, stdev=48507.29, samples=20 00:28:43.952 iops : min= 38, max= 560, avg=246.70, stdev=189.48, samples=20 00:28:43.952 lat (msec) : 20=0.83%, 50=3.52%, 100=6.72%, 250=54.48%, 500=21.26% 00:28:43.952 lat (msec) : 750=8.81%, 1000=4.03%, 2000=0.36% 00:28:43.952 cpu : usr=0.07%, sys=0.88%, ctx=426, 
majf=0, minf=4097 00:28:43.952 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:28:43.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:43.952 issued rwts: total=2531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.952 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:43.952 job10: (groupid=0, jobs=1): err= 0: pid=3496459: Thu Nov 28 12:59:12 2024 00:28:43.952 read: IOPS=227, BW=56.8MiB/s (59.6MB/s)(576MiB/10137msec) 00:28:43.952 slat (usec): min=12, max=494538, avg=3229.63, stdev=18096.87 00:28:43.952 clat (msec): min=15, max=1115, avg=277.68, stdev=206.77 00:28:43.952 lat (msec): min=15, max=1188, avg=280.91, stdev=210.24 00:28:43.952 clat percentiles (msec): 00:28:43.952 | 1.00th=[ 37], 5.00th=[ 57], 10.00th=[ 71], 20.00th=[ 105], 00:28:43.952 | 30.00th=[ 122], 40.00th=[ 186], 50.00th=[ 241], 60.00th=[ 275], 00:28:43.952 | 70.00th=[ 330], 80.00th=[ 409], 90.00th=[ 575], 95.00th=[ 768], 00:28:43.952 | 99.00th=[ 852], 99.50th=[ 877], 99.90th=[ 894], 99.95th=[ 1116], 00:28:43.952 | 99.99th=[ 1116] 00:28:43.952 bw ( KiB/s): min=15872, max=148992, per=5.79%, avg=57369.60, stdev=38395.68, samples=20 00:28:43.952 iops : min= 62, max= 582, avg=224.10, stdev=149.98, samples=20 00:28:43.952 lat (msec) : 20=0.30%, 50=2.82%, 100=14.84%, 250=35.40%, 500=35.01% 00:28:43.952 lat (msec) : 750=5.47%, 1000=6.07%, 2000=0.09% 00:28:43.952 cpu : usr=0.05%, sys=0.84%, ctx=497, majf=0, minf=4097 00:28:43.952 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:28:43.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:43.952 issued rwts: total=2305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.952 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:43.952 00:28:43.952 Run status 
group 0 (all jobs): 00:28:43.952 READ: bw=968MiB/s (1015MB/s), 32.2MiB/s-329MiB/s (33.8MB/s-345MB/s), io=9811MiB (10.3GB), run=10022-10140msec 00:28:43.952 00:28:43.952 Disk stats (read/write): 00:28:43.952 nvme0n1: ios=7550/0, merge=0/0, ticks=1236385/0, in_queue=1236385, util=96.45% 00:28:43.952 nvme10n1: ios=25802/0, merge=0/0, ticks=1226699/0, in_queue=1226699, util=96.53% 00:28:43.952 nvme1n1: ios=12430/0, merge=0/0, ticks=1246275/0, in_queue=1246275, util=96.93% 00:28:43.952 nvme2n1: ios=7253/0, merge=0/0, ticks=1232791/0, in_queue=1232791, util=97.12% 00:28:43.952 nvme3n1: ios=2778/0, merge=0/0, ticks=1237177/0, in_queue=1237177, util=97.22% 00:28:43.952 nvme4n1: ios=3877/0, merge=0/0, ticks=1249919/0, in_queue=1249919, util=97.75% 00:28:43.952 nvme5n1: ios=2540/0, merge=0/0, ticks=1248894/0, in_queue=1248894, util=97.94% 00:28:43.952 nvme6n1: ios=2787/0, merge=0/0, ticks=1255455/0, in_queue=1255455, util=98.17% 00:28:43.952 nvme7n1: ios=2605/0, merge=0/0, ticks=1250243/0, in_queue=1250243, util=98.63% 00:28:43.952 nvme8n1: ios=4998/0, merge=0/0, ticks=1246660/0, in_queue=1246660, util=98.99% 00:28:43.952 nvme9n1: ios=4545/0, merge=0/0, ticks=1252947/0, in_queue=1252947, util=99.17% 00:28:43.952 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:28:43.952 [global] 00:28:43.952 thread=1 00:28:43.952 invalidate=1 00:28:43.952 rw=randwrite 00:28:43.952 time_based=1 00:28:43.952 runtime=10 00:28:43.952 ioengine=libaio 00:28:43.952 direct=1 00:28:43.952 bs=262144 00:28:43.952 iodepth=64 00:28:43.952 norandommap=1 00:28:43.952 numjobs=1 00:28:43.952 00:28:43.952 [job0] 00:28:43.952 filename=/dev/nvme0n1 00:28:43.952 [job1] 00:28:43.952 filename=/dev/nvme10n1 00:28:43.952 [job2] 00:28:43.952 filename=/dev/nvme1n1 00:28:43.952 [job3] 00:28:43.952 filename=/dev/nvme2n1 00:28:43.952 [job4] 00:28:43.952 
filename=/dev/nvme3n1 00:28:43.952 [job5] 00:28:43.952 filename=/dev/nvme4n1 00:28:43.952 [job6] 00:28:43.952 filename=/dev/nvme5n1 00:28:43.952 [job7] 00:28:43.952 filename=/dev/nvme6n1 00:28:43.952 [job8] 00:28:43.952 filename=/dev/nvme7n1 00:28:43.952 [job9] 00:28:43.952 filename=/dev/nvme8n1 00:28:43.952 [job10] 00:28:43.952 filename=/dev/nvme9n1 00:28:43.952 Could not set queue depth (nvme0n1) 00:28:43.952 Could not set queue depth (nvme10n1) 00:28:43.952 Could not set queue depth (nvme1n1) 00:28:43.952 Could not set queue depth (nvme2n1) 00:28:43.953 Could not set queue depth (nvme3n1) 00:28:43.953 Could not set queue depth (nvme4n1) 00:28:43.953 Could not set queue depth (nvme5n1) 00:28:43.953 Could not set queue depth (nvme6n1) 00:28:43.953 Could not set queue depth (nvme7n1) 00:28:43.953 Could not set queue depth (nvme8n1) 00:28:43.953 Could not set queue depth (nvme9n1) 00:28:43.953 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:43.953 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:43.953 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:43.953 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:43.953 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:43.953 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:43.953 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:43.953 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:43.953 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, 
(T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:43.953 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:43.953 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:43.953 fio-3.35 00:28:43.953 Starting 11 threads 00:28:53.955 00:28:53.955 job0: (groupid=0, jobs=1): err= 0: pid=3498194: Thu Nov 28 12:59:23 2024 00:28:53.955 write: IOPS=432, BW=108MiB/s (113MB/s)(1087MiB/10063msec); 0 zone resets 00:28:53.955 slat (usec): min=28, max=22617, avg=1869.66, stdev=4344.90 00:28:53.955 clat (msec): min=2, max=341, avg=146.21, stdev=72.20 00:28:53.955 lat (msec): min=2, max=344, avg=148.08, stdev=73.21 00:28:53.955 clat percentiles (msec): 00:28:53.955 | 1.00th=[ 33], 5.00th=[ 51], 10.00th=[ 59], 20.00th=[ 72], 00:28:53.955 | 30.00th=[ 87], 40.00th=[ 104], 50.00th=[ 148], 60.00th=[ 182], 00:28:53.955 | 70.00th=[ 194], 80.00th=[ 220], 90.00th=[ 245], 95.00th=[ 255], 00:28:53.955 | 99.00th=[ 296], 99.50th=[ 321], 99.90th=[ 338], 99.95th=[ 338], 00:28:53.955 | 99.99th=[ 342] 00:28:53.955 bw ( KiB/s): min=63488, max=255488, per=8.15%, avg=109696.00, stdev=52898.06, samples=20 00:28:53.955 iops : min= 248, max= 998, avg=428.50, stdev=206.63, samples=20 00:28:53.955 lat (msec) : 4=0.02%, 10=0.21%, 20=0.51%, 50=4.14%, 100=34.29% 00:28:53.955 lat (msec) : 250=53.20%, 500=7.64% 00:28:53.955 cpu : usr=1.08%, sys=1.46%, ctx=1853, majf=0, minf=1 00:28:53.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:53.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:53.955 issued rwts: total=0,4348,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.955 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:53.955 job1: (groupid=0, jobs=1): err= 0: pid=3498206: Thu Nov 28 12:59:23 2024 00:28:53.955 
write: IOPS=427, BW=107MiB/s (112MB/s)(1081MiB/10105msec); 0 zone resets 00:28:53.955 slat (usec): min=24, max=342598, avg=2105.07, stdev=7872.83 00:28:53.955 clat (msec): min=5, max=524, avg=147.38, stdev=84.21 00:28:53.955 lat (msec): min=5, max=533, avg=149.48, stdev=85.24 00:28:53.955 clat percentiles (msec): 00:28:53.955 | 1.00th=[ 13], 5.00th=[ 40], 10.00th=[ 66], 20.00th=[ 68], 00:28:53.955 | 30.00th=[ 70], 40.00th=[ 95], 50.00th=[ 161], 60.00th=[ 186], 00:28:53.955 | 70.00th=[ 194], 80.00th=[ 211], 90.00th=[ 241], 95.00th=[ 264], 00:28:53.955 | 99.00th=[ 430], 99.50th=[ 460], 99.90th=[ 518], 99.95th=[ 523], 00:28:53.955 | 99.99th=[ 523] 00:28:53.955 bw ( KiB/s): min=41984, max=237056, per=8.11%, avg=109081.60, stdev=57734.25, samples=20 00:28:53.955 iops : min= 164, max= 926, avg=426.10, stdev=225.52, samples=20 00:28:53.955 lat (msec) : 10=0.51%, 20=2.47%, 50=2.91%, 100=34.60%, 250=52.29% 00:28:53.955 lat (msec) : 500=6.85%, 750=0.37% 00:28:53.955 cpu : usr=0.99%, sys=1.19%, ctx=1469, majf=0, minf=1 00:28:53.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:28:53.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:53.955 issued rwts: total=0,4324,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.955 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:53.955 job2: (groupid=0, jobs=1): err= 0: pid=3498208: Thu Nov 28 12:59:23 2024 00:28:53.955 write: IOPS=432, BW=108MiB/s (113MB/s)(1093MiB/10120msec); 0 zone resets 00:28:53.955 slat (usec): min=18, max=36167, avg=2025.54, stdev=4278.94 00:28:53.955 clat (msec): min=8, max=427, avg=146.03, stdev=72.87 00:28:53.955 lat (msec): min=8, max=427, avg=148.06, stdev=73.58 00:28:53.955 clat percentiles (msec): 00:28:53.955 | 1.00th=[ 51], 5.00th=[ 69], 10.00th=[ 72], 20.00th=[ 80], 00:28:53.955 | 30.00th=[ 107], 40.00th=[ 113], 50.00th=[ 116], 60.00th=[ 138], 
00:28:53.955 | 70.00th=[ 180], 80.00th=[ 213], 90.00th=[ 234], 95.00th=[ 284], 00:28:53.955 | 99.00th=[ 397], 99.50th=[ 409], 99.90th=[ 422], 99.95th=[ 426], 00:28:53.955 | 99.99th=[ 426] 00:28:53.955 bw ( KiB/s): min=46592, max=225280, per=8.20%, avg=110357.15, stdev=49427.72, samples=20 00:28:53.955 iops : min= 182, max= 880, avg=431.05, stdev=193.01, samples=20 00:28:53.955 lat (msec) : 10=0.02%, 20=0.18%, 50=0.80%, 100=24.54%, 250=68.01% 00:28:53.955 lat (msec) : 500=6.45% 00:28:53.955 cpu : usr=1.04%, sys=1.14%, ctx=1343, majf=0, minf=1 00:28:53.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:53.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:53.955 issued rwts: total=0,4373,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.955 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:53.955 job3: (groupid=0, jobs=1): err= 0: pid=3498209: Thu Nov 28 12:59:23 2024 00:28:53.955 write: IOPS=382, BW=95.6MiB/s (100MB/s)(969MiB/10134msec); 0 zone resets 00:28:53.955 slat (usec): min=19, max=33281, avg=2137.64, stdev=4484.97 00:28:53.955 clat (msec): min=3, max=435, avg=165.17, stdev=65.71 00:28:53.955 lat (msec): min=5, max=435, avg=167.31, stdev=66.12 00:28:53.955 clat percentiles (msec): 00:28:53.955 | 1.00th=[ 38], 5.00th=[ 87], 10.00th=[ 107], 20.00th=[ 112], 00:28:53.955 | 30.00th=[ 115], 40.00th=[ 120], 50.00th=[ 159], 60.00th=[ 178], 00:28:53.955 | 70.00th=[ 197], 80.00th=[ 220], 90.00th=[ 239], 95.00th=[ 305], 00:28:53.955 | 99.00th=[ 359], 99.50th=[ 368], 99.90th=[ 418], 99.95th=[ 426], 00:28:53.955 | 99.99th=[ 435] 00:28:53.955 bw ( KiB/s): min=53248, max=163840, per=7.25%, avg=97536.00, stdev=30639.73, samples=20 00:28:53.955 iops : min= 208, max= 640, avg=381.00, stdev=119.69, samples=20 00:28:53.955 lat (msec) : 4=0.03%, 10=0.13%, 20=0.41%, 50=0.52%, 100=5.42% 00:28:53.955 lat (msec) : 250=86.29%, 
500=7.20% 00:28:53.955 cpu : usr=0.87%, sys=1.18%, ctx=1378, majf=0, minf=1 00:28:53.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:28:53.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:53.955 issued rwts: total=0,3874,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.955 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:53.955 job4: (groupid=0, jobs=1): err= 0: pid=3498210: Thu Nov 28 12:59:23 2024 00:28:53.955 write: IOPS=426, BW=107MiB/s (112MB/s)(1079MiB/10128msec); 0 zone resets 00:28:53.955 slat (usec): min=21, max=37846, avg=1802.70, stdev=4601.42 00:28:53.955 clat (msec): min=3, max=492, avg=148.33, stdev=88.19 00:28:53.955 lat (msec): min=4, max=492, avg=150.13, stdev=89.31 00:28:53.955 clat percentiles (msec): 00:28:53.955 | 1.00th=[ 10], 5.00th=[ 37], 10.00th=[ 56], 20.00th=[ 58], 00:28:53.955 | 30.00th=[ 62], 40.00th=[ 110], 50.00th=[ 159], 60.00th=[ 182], 00:28:53.955 | 70.00th=[ 197], 80.00th=[ 226], 90.00th=[ 251], 95.00th=[ 268], 00:28:53.955 | 99.00th=[ 422], 99.50th=[ 460], 99.90th=[ 485], 99.95th=[ 493], 00:28:53.955 | 99.99th=[ 493] 00:28:53.955 bw ( KiB/s): min=53760, max=276992, per=8.09%, avg=108888.00, stdev=55917.08, samples=20 00:28:53.955 iops : min= 210, max= 1082, avg=425.30, stdev=218.42, samples=20 00:28:53.955 lat (msec) : 4=0.02%, 10=1.18%, 20=1.81%, 50=4.98%, 100=30.44% 00:28:53.955 lat (msec) : 250=50.97%, 500=10.59% 00:28:53.955 cpu : usr=0.99%, sys=1.42%, ctx=1978, majf=0, minf=1 00:28:53.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:28:53.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:53.955 issued rwts: total=0,4316,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.955 latency : target=0, window=0, percentile=100.00%, 
depth=64 00:28:53.955 job5: (groupid=0, jobs=1): err= 0: pid=3498211: Thu Nov 28 12:59:23 2024 00:28:53.955 write: IOPS=338, BW=84.6MiB/s (88.7MB/s)(857MiB/10135msec); 0 zone resets 00:28:53.955 slat (usec): min=23, max=39951, avg=2634.68, stdev=5420.49 00:28:53.955 clat (msec): min=19, max=460, avg=186.46, stdev=68.26 00:28:53.955 lat (msec): min=21, max=460, avg=189.09, stdev=68.94 00:28:53.955 clat percentiles (msec): 00:28:53.955 | 1.00th=[ 54], 5.00th=[ 96], 10.00th=[ 107], 20.00th=[ 114], 00:28:53.955 | 30.00th=[ 131], 40.00th=[ 174], 50.00th=[ 190], 60.00th=[ 213], 00:28:53.955 | 70.00th=[ 228], 80.00th=[ 241], 90.00th=[ 257], 95.00th=[ 275], 00:28:53.955 | 99.00th=[ 409], 99.50th=[ 426], 99.90th=[ 447], 99.95th=[ 460], 00:28:53.955 | 99.99th=[ 460] 00:28:53.955 bw ( KiB/s): min=47104, max=148992, per=6.40%, avg=86144.00, stdev=28323.35, samples=20 00:28:53.955 iops : min= 184, max= 582, avg=336.50, stdev=110.64, samples=20 00:28:53.955 lat (msec) : 20=0.03%, 50=0.79%, 100=4.99%, 250=78.83%, 500=15.37% 00:28:53.955 cpu : usr=0.83%, sys=0.97%, ctx=1117, majf=0, minf=1 00:28:53.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:28:53.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:53.955 issued rwts: total=0,3429,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.955 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:53.955 job6: (groupid=0, jobs=1): err= 0: pid=3498212: Thu Nov 28 12:59:23 2024 00:28:53.955 write: IOPS=396, BW=99.0MiB/s (104MB/s)(996MiB/10059msec); 0 zone resets 00:28:53.955 slat (usec): min=24, max=70694, avg=2119.39, stdev=4676.78 00:28:53.955 clat (msec): min=7, max=478, avg=158.96, stdev=65.12 00:28:53.955 lat (msec): min=7, max=478, avg=161.08, stdev=65.93 00:28:53.955 clat percentiles (msec): 00:28:53.955 | 1.00th=[ 25], 5.00th=[ 40], 10.00th=[ 85], 20.00th=[ 110], 00:28:53.955 | 
30.00th=[ 118], 40.00th=[ 142], 50.00th=[ 165], 60.00th=[ 184], 00:28:53.956 | 70.00th=[ 192], 80.00th=[ 205], 90.00th=[ 230], 95.00th=[ 243], 00:28:53.956 | 99.00th=[ 342], 99.50th=[ 409], 99.90th=[ 464], 99.95th=[ 472], 00:28:53.956 | 99.99th=[ 481] 00:28:53.956 bw ( KiB/s): min=62976, max=173056, per=7.46%, avg=100403.20, stdev=31489.11, samples=20 00:28:53.956 iops : min= 246, max= 676, avg=392.20, stdev=123.00, samples=20 00:28:53.956 lat (msec) : 10=0.03%, 20=0.53%, 50=5.60%, 100=10.46%, 250=79.27% 00:28:53.956 lat (msec) : 500=4.12% 00:28:53.956 cpu : usr=1.00%, sys=1.35%, ctx=1564, majf=0, minf=1 00:28:53.956 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:28:53.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:53.956 issued rwts: total=0,3985,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.956 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:53.956 job7: (groupid=0, jobs=1): err= 0: pid=3498213: Thu Nov 28 12:59:23 2024 00:28:53.956 write: IOPS=616, BW=154MiB/s (162MB/s)(1551MiB/10065msec); 0 zone resets 00:28:53.956 slat (usec): min=25, max=41411, avg=1365.63, stdev=3180.84 00:28:53.956 clat (msec): min=5, max=460, avg=102.39, stdev=60.09 00:28:53.956 lat (msec): min=5, max=466, avg=103.75, stdev=60.80 00:28:53.956 clat percentiles (msec): 00:28:53.956 | 1.00th=[ 14], 5.00th=[ 42], 10.00th=[ 61], 20.00th=[ 63], 00:28:53.956 | 30.00th=[ 67], 40.00th=[ 75], 50.00th=[ 79], 60.00th=[ 102], 00:28:53.956 | 70.00th=[ 110], 80.00th=[ 133], 90.00th=[ 194], 95.00th=[ 228], 00:28:53.956 | 99.00th=[ 284], 99.50th=[ 405], 99.90th=[ 443], 99.95th=[ 451], 00:28:53.956 | 99.99th=[ 460] 00:28:53.956 bw ( KiB/s): min=71680, max=257536, per=11.69%, avg=157235.20, stdev=63344.39, samples=20 00:28:53.956 iops : min= 280, max= 1006, avg=614.20, stdev=247.44, samples=20 00:28:53.956 lat (msec) : 10=0.61%, 20=1.18%, 
50=5.45%, 100=52.20%, 250=38.31% 00:28:53.956 lat (msec) : 500=2.26% 00:28:53.956 cpu : usr=1.34%, sys=2.30%, ctx=2317, majf=0, minf=1 00:28:53.956 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:53.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:53.956 issued rwts: total=0,6205,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.956 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:53.956 job8: (groupid=0, jobs=1): err= 0: pid=3498214: Thu Nov 28 12:59:23 2024 00:28:53.956 write: IOPS=678, BW=170MiB/s (178MB/s)(1714MiB/10107msec); 0 zone resets 00:28:53.956 slat (usec): min=16, max=52348, avg=1419.36, stdev=3037.04 00:28:53.956 clat (msec): min=30, max=310, avg=92.89, stdev=54.95 00:28:53.956 lat (msec): min=32, max=310, avg=94.31, stdev=55.68 00:28:53.956 clat percentiles (msec): 00:28:53.956 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 60], 00:28:53.956 | 30.00th=[ 64], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 77], 00:28:53.956 | 70.00th=[ 81], 80.00th=[ 157], 90.00th=[ 190], 95.00th=[ 201], 00:28:53.956 | 99.00th=[ 264], 99.50th=[ 284], 99.90th=[ 305], 99.95th=[ 309], 00:28:53.956 | 99.99th=[ 313] 00:28:53.956 bw ( KiB/s): min=74901, max=354816, per=12.92%, avg=173831.45, stdev=87103.10, samples=20 00:28:53.956 iops : min= 292, max= 1386, avg=679.00, stdev=340.28, samples=20 00:28:53.956 lat (msec) : 50=14.15%, 100=62.08%, 250=22.50%, 500=1.27% 00:28:53.956 cpu : usr=1.60%, sys=2.22%, ctx=1708, majf=0, minf=1 00:28:53.956 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:28:53.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:53.956 issued rwts: total=0,6854,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.956 latency : target=0, window=0, percentile=100.00%, 
depth=64 00:28:53.956 job9: (groupid=0, jobs=1): err= 0: pid=3498215: Thu Nov 28 12:59:23 2024 00:28:53.956 write: IOPS=590, BW=148MiB/s (155MB/s)(1492MiB/10106msec); 0 zone resets 00:28:53.956 slat (usec): min=24, max=27434, avg=1653.09, stdev=3740.92 00:28:53.956 clat (msec): min=5, max=298, avg=106.71, stdev=74.28 00:28:53.956 lat (msec): min=6, max=298, avg=108.36, stdev=75.36 00:28:53.956 clat percentiles (msec): 00:28:53.956 | 1.00th=[ 43], 5.00th=[ 45], 10.00th=[ 46], 20.00th=[ 47], 00:28:53.956 | 30.00th=[ 49], 40.00th=[ 52], 50.00th=[ 58], 60.00th=[ 82], 00:28:53.956 | 70.00th=[ 165], 80.00th=[ 190], 90.00th=[ 215], 95.00th=[ 255], 00:28:53.956 | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 288], 99.95th=[ 288], 00:28:53.956 | 99.99th=[ 300] 00:28:53.956 bw ( KiB/s): min=61440, max=344064, per=11.23%, avg=151142.40, stdev=104863.09, samples=20 00:28:53.956 iops : min= 240, max= 1344, avg=590.40, stdev=409.62, samples=20 00:28:53.956 lat (msec) : 10=0.02%, 20=0.05%, 50=35.73%, 100=25.66%, 250=32.93% 00:28:53.956 lat (msec) : 500=5.61% 00:28:53.956 cpu : usr=1.41%, sys=1.87%, ctx=1494, majf=0, minf=1 00:28:53.956 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:28:53.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:53.956 issued rwts: total=0,5967,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.956 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:53.956 job10: (groupid=0, jobs=1): err= 0: pid=3498217: Thu Nov 28 12:59:23 2024 00:28:53.956 write: IOPS=555, BW=139MiB/s (146MB/s)(1398MiB/10064msec); 0 zone resets 00:28:53.956 slat (usec): min=17, max=21522, avg=1783.16, stdev=3595.56 00:28:53.956 clat (msec): min=12, max=271, avg=113.35, stdev=57.50 00:28:53.956 lat (msec): min=12, max=271, avg=115.13, stdev=58.34 00:28:53.956 clat percentiles (msec): 00:28:53.956 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 65], 
20.00th=[ 68], 00:28:53.956 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 103], 60.00th=[ 110], 00:28:53.956 | 70.00th=[ 122], 80.00th=[ 169], 90.00th=[ 218], 95.00th=[ 232], 00:28:53.956 | 99.00th=[ 249], 99.50th=[ 255], 99.90th=[ 271], 99.95th=[ 271], 00:28:53.956 | 99.99th=[ 271] 00:28:53.956 bw ( KiB/s): min=70144, max=253952, per=10.52%, avg=141568.00, stdev=62829.26, samples=20 00:28:53.956 iops : min= 274, max= 992, avg=553.00, stdev=245.43, samples=20 00:28:53.956 lat (msec) : 20=0.14%, 50=5.27%, 100=43.50%, 250=50.30%, 500=0.79% 00:28:53.956 cpu : usr=1.28%, sys=1.81%, ctx=1363, majf=0, minf=1 00:28:53.956 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:28:53.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:53.956 issued rwts: total=0,5593,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.956 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:53.956 00:28:53.956 Run status group 0 (all jobs): 00:28:53.956 WRITE: bw=1314MiB/s (1378MB/s), 84.6MiB/s-170MiB/s (88.7MB/s-178MB/s), io=13.0GiB (14.0GB), run=10059-10135msec 00:28:53.956 00:28:53.956 Disk stats (read/write): 00:28:53.956 nvme0n1: ios=49/8355, merge=0/0, ticks=174/1204163, in_queue=1204337, util=97.78% 00:28:53.956 nvme10n1: ios=46/8611, merge=0/0, ticks=3398/1179818, in_queue=1183216, util=100.00% 00:28:53.956 nvme1n1: ios=0/8685, merge=0/0, ticks=0/1225900, in_queue=1225900, util=96.95% 00:28:53.956 nvme2n1: ios=40/7669, merge=0/0, ticks=2017/1226037, in_queue=1228054, util=100.00% 00:28:53.956 nvme3n1: ios=5/8564, merge=0/0, ticks=210/1233345, in_queue=1233555, util=97.33% 00:28:53.956 nvme4n1: ios=0/6775, merge=0/0, ticks=0/1223081, in_queue=1223081, util=97.73% 00:28:53.956 nvme5n1: ios=43/7623, merge=0/0, ticks=1299/1199742, in_queue=1201041, util=100.00% 00:28:53.956 nvme6n1: ios=41/12078, merge=0/0, ticks=1687/1197685, in_queue=1199372, 
util=100.00% 00:28:53.956 nvme7n1: ios=39/13666, merge=0/0, ticks=1650/1226607, in_queue=1228257, util=100.00% 00:28:53.956 nvme8n1: ios=0/11894, merge=0/0, ticks=0/1226893, in_queue=1226893, util=98.91% 00:28:53.956 nvme9n1: ios=0/10927, merge=0/0, ticks=0/1191129, in_queue=1191129, util=99.06% 00:28:53.956 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:28:53.956 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:28:53.956 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:53.956 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:53.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:53.956 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:28:53.956 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:28:53.956 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:53.956 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:28:53.956 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:53.956 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:28:53.956 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:28:53.956 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:53.956 12:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.956 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:53.956 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.956 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:53.956 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:28:54.217 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:28:54.217 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:28:54.217 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:28:54.217 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:54.217 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:28:54.478 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:54.478 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:28:54.478 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:28:54.478 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:54.478 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.478 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:28:54.478 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.478 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:54.478 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:28:54.739 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:28:54.739 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:28:54.739 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:28:54.739 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:54.739 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:28:54.739 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:54.739 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:28:54.739 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:28:54.739 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:54.739 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.739 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:54.739 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.739 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 
-- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:54.739 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:28:55.000 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:28:55.000 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:28:55.000 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:28:55.000 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:55.000 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:28:55.000 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:55.000 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:28:55.000 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:28:55.000 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:28:55.000 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.000 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:55.000 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.000 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:55.000 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:28:55.000 
NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:28:55.000 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:28:55.000 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:28:55.000 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:55.000 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:28:55.259 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:28:55.259 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:55.259 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:28:55.259 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:28:55.259 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.259 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:55.259 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.259 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:55.259 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:28:55.520 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:28:55.520 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:28:55.520 12:59:25 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:28:55.520 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:55.520 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:28:55.520 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:55.520 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:28:55.520 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:28:55.520 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:28:55.520 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.520 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:55.520 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.520 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:55.520 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:28:55.781 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:28:55.781 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:28:55.781 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:28:55.781 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o 
NAME,SERIAL 00:28:55.781 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:28:55.781 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:55.781 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:28:55.781 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:28:55.781 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:28:55.781 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.781 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:55.781 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.781 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:55.781 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:28:56.042 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:28:56.042 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:28:56.042 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:28:56.042 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:56.042 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:28:56.042 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:56.042 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:28:56.042 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:28:56.042 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:28:56.042 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.042 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:56.042 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.042 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:56.042 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:28:56.303 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:28:56.303 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:28:56.303 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:28:56.303 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:56.303 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:28:56.303 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:56.303 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:28:56.303 12:59:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:28:56.303 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:28:56.303 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.303 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:56.303 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.303 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:56.303 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:28:56.565 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:28:56.565 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.565 12:59:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:56.565 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:56.565 rmmod nvme_tcp 00:28:56.826 rmmod nvme_fabrics 00:28:56.826 rmmod nvme_keyring 00:28:56.826 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:56.826 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:28:56.826 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:28:56.826 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 3487947 ']' 00:28:56.826 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 3487947 00:28:56.826 12:59:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 3487947 ']' 00:28:56.826 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 3487947 00:28:56.826 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:28:56.826 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:56.826 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3487947 00:28:56.826 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:56.826 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:56.826 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3487947' 00:28:56.826 killing process with pid 3487947 00:28:56.826 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 3487947 00:28:56.826 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 3487947 00:28:57.086 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:57.086 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:57.086 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:57.086 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:28:57.086 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:28:57.086 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:57.086 
12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:28:57.086 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:57.086 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:57.086 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.086 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.087 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.004 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:59.004 00:28:59.004 real 1m18.121s 00:28:59.004 user 4m59.157s 00:28:59.004 sys 0m17.325s 00:28:59.265 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:59.265 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:59.265 ************************************ 00:28:59.265 END TEST nvmf_multiconnection 00:28:59.265 ************************************ 00:28:59.265 12:59:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:59.265 12:59:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:59.265 12:59:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:59.265 12:59:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:59.265 ************************************ 00:28:59.265 START TEST nvmf_initiator_timeout 
00:28:59.265 ************************************ 00:28:59.265 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:59.265 * Looking for test storage... 00:28:59.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:59.265 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:59.265 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:28:59.265 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:28:59.526 12:59:29 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:59.526 12:59:29 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:59.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.526 --rc genhtml_branch_coverage=1 00:28:59.526 --rc genhtml_function_coverage=1 00:28:59.526 --rc genhtml_legend=1 00:28:59.526 --rc geninfo_all_blocks=1 00:28:59.526 --rc geninfo_unexecuted_blocks=1 00:28:59.526 00:28:59.526 ' 00:28:59.526 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:59.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.527 --rc genhtml_branch_coverage=1 00:28:59.527 --rc genhtml_function_coverage=1 00:28:59.527 --rc genhtml_legend=1 00:28:59.527 --rc geninfo_all_blocks=1 00:28:59.527 --rc geninfo_unexecuted_blocks=1 00:28:59.527 00:28:59.527 ' 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:59.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.527 --rc genhtml_branch_coverage=1 00:28:59.527 --rc genhtml_function_coverage=1 00:28:59.527 --rc genhtml_legend=1 00:28:59.527 --rc geninfo_all_blocks=1 00:28:59.527 --rc geninfo_unexecuted_blocks=1 00:28:59.527 00:28:59.527 ' 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:59.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.527 --rc genhtml_branch_coverage=1 00:28:59.527 --rc genhtml_function_coverage=1 
00:28:59.527 --rc genhtml_legend=1 00:28:59.527 --rc geninfo_all_blocks=1 00:28:59.527 --rc geninfo_unexecuted_blocks=1 00:28:59.527 00:28:59.527 ' 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:59.527 12:59:29 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:59.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.527 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.528 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:59.528 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:59.528 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:28:59.528 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:07.675 12:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.675 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:07.676 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:07.676 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:07.676 12:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:07.676 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.676 12:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:07.676 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:07.676 12:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:07.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:07.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:29:07.676 00:29:07.676 --- 10.0.0.2 ping statistics --- 00:29:07.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.676 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:07.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:07.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:29:07.676 00:29:07.676 --- 10.0.0.1 ping statistics --- 00:29:07.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.676 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=3504509 
00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 3504509 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 3504509 ']' 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.676 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:07.676 [2024-11-28 12:59:37.026294] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:29:07.676 [2024-11-28 12:59:37.026360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.676 [2024-11-28 12:59:37.170966] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:29:07.676 [2024-11-28 12:59:37.229050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:07.676 [2024-11-28 12:59:37.257405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.676 [2024-11-28 12:59:37.257450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.677 [2024-11-28 12:59:37.257459] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.677 [2024-11-28 12:59:37.257471] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.677 [2024-11-28 12:59:37.257477] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:07.677 [2024-11-28 12:59:37.259526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.677 [2024-11-28 12:59:37.259686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:07.677 [2024-11-28 12:59:37.259819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.677 [2024-11-28 12:59:37.259819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:07.937 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.937 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:29:07.937 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:07.937 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:07.937 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:07.937 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 
00:29:07.937 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:29:07.937 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:07.937 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.938 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:07.938 Malloc0 00:29:07.938 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.938 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:29:07.938 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.938 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:07.938 Delay0 00:29:07.938 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.938 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:07.938 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.938 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:07.938 [2024-11-28 12:59:37.964031] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.938 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.938 12:59:37 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:07.938 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.938 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:07.938 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.938 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:07.938 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.938 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:07.938 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.938 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:07.938 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.938 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:07.938 [2024-11-28 12:59:38.004412] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:07.938 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.938 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:09.851 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:29:09.851 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:29:09.851 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:29:09.851 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:29:09.851 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:29:11.899 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:29:11.899 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:29:11.899 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:29:11.899 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:29:11.899 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:29:11.899 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:29:11.899 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3505547 00:29:11.899 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:29:11.899 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@37 -- # sleep 3 00:29:11.899 [global] 00:29:11.899 thread=1 00:29:11.899 invalidate=1 00:29:11.899 rw=write 00:29:11.899 time_based=1 00:29:11.899 runtime=60 00:29:11.899 ioengine=libaio 00:29:11.899 direct=1 00:29:11.899 bs=4096 00:29:11.899 iodepth=1 00:29:11.899 norandommap=0 00:29:11.899 numjobs=1 00:29:11.899 00:29:11.899 verify_dump=1 00:29:11.899 verify_backlog=512 00:29:11.899 verify_state_save=0 00:29:11.899 do_verify=1 00:29:11.899 verify=crc32c-intel 00:29:11.899 [job0] 00:29:11.899 filename=/dev/nvme0n1 00:29:11.899 Could not set queue depth (nvme0n1) 00:29:11.899 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:11.899 fio-3.35 00:29:11.899 Starting 1 thread 00:29:15.204 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:29:15.204 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.204 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:15.204 true 00:29:15.204 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.204 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:29:15.204 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.204 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:15.204 true 00:29:15.204 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.204 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd 
bdev_delay_update_latency Delay0 p99_read 31000000 00:29:15.204 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.204 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:15.204 true 00:29:15.204 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.204 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:29:15.204 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.204 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:15.204 true 00:29:15.204 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.204 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:29:17.781 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:29:17.781 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.781 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:17.781 true 00:29:17.781 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.781 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:29:17.781 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.781 12:59:47 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:17.781 true 00:29:17.781 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.781 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:29:17.781 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.781 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:17.781 true 00:29:17.781 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.781 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:29:17.781 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.781 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:17.781 true 00:29:17.781 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.781 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:29:17.781 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3505547 00:30:14.059 00:30:14.059 job0: (groupid=0, jobs=1): err= 0: pid=3505714: Thu Nov 28 13:00:42 2024 00:30:14.059 read: IOPS=99, BW=398KiB/s (408kB/s)(23.3MiB/60002msec) 00:30:14.059 slat (usec): min=6, max=13914, avg=30.90, stdev=217.83 00:30:14.059 clat (usec): min=378, max=42084, avg=2353.85, stdev=7323.36 00:30:14.059 lat (usec): min=404, max=42110, avg=2384.75, stdev=7325.83 00:30:14.059 clat percentiles (usec): 
00:30:14.059 | 1.00th=[ 750], 5.00th=[ 840], 10.00th=[ 881], 20.00th=[ 955], 00:30:14.059 | 30.00th=[ 979], 40.00th=[ 1004], 50.00th=[ 1020], 60.00th=[ 1029], 00:30:14.059 | 70.00th=[ 1045], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1139], 00:30:14.059 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:14.059 | 99.99th=[42206] 00:30:14.059 write: IOPS=102, BW=410KiB/s (419kB/s)(24.0MiB/60002msec); 0 zone resets 00:30:14.059 slat (nsec): min=9253, max=70142, avg=30632.99, stdev=10502.76 00:30:14.059 clat (usec): min=180, max=42129k, avg=7399.79, stdev=537462.58 00:30:14.059 lat (usec): min=191, max=42129k, avg=7430.42, stdev=537462.65 00:30:14.059 clat percentiles (usec): 00:30:14.059 | 1.00th=[ 269], 5.00th=[ 351], 10.00th=[ 404], 00:30:14.059 | 20.00th=[ 441], 30.00th=[ 494], 40.00th=[ 523], 00:30:14.059 | 50.00th=[ 545], 60.00th=[ 570], 70.00th=[ 611], 00:30:14.059 | 80.00th=[ 644], 90.00th=[ 676], 95.00th=[ 709], 00:30:14.059 | 99.00th=[ 783], 99.50th=[ 816], 99.90th=[ 889], 00:30:14.059 | 99.95th=[ 930], 99.99th=[17112761] 00:30:14.059 bw ( KiB/s): min= 328, max= 4096, per=100.00%, avg=2457.60, stdev=1228.01, samples=20 00:30:14.059 iops : min= 82, max= 1024, avg=614.40, stdev=307.00, samples=20 00:30:14.059 lat (usec) : 250=0.43%, 500=15.76%, 750=34.18%, 1000=20.00% 00:30:14.059 lat (msec) : 2=27.99%, 50=1.63%, >=2000=0.01% 00:30:14.059 cpu : usr=0.35%, sys=0.66%, ctx=12124, majf=0, minf=1 00:30:14.059 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:14.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.059 issued rwts: total=5976,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:14.059 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:14.059 00:30:14.059 Run status group 0 (all jobs): 00:30:14.059 READ: bw=398KiB/s (408kB/s), 398KiB/s-398KiB/s (408kB/s-408kB/s), io=23.3MiB 
(24.5MB), run=60002-60002msec 00:30:14.059 WRITE: bw=410KiB/s (419kB/s), 410KiB/s-410KiB/s (419kB/s-419kB/s), io=24.0MiB (25.2MB), run=60002-60002msec 00:30:14.059 00:30:14.059 Disk stats (read/write): 00:30:14.059 nvme0n1: ios=6037/6144, merge=0/0, ticks=14372/3010, in_queue=17382, util=99.98% 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:14.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:30:14.060 nvmf hotplug test: fio successful as expected 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:14.060 rmmod nvme_tcp 00:30:14.060 rmmod nvme_fabrics 00:30:14.060 rmmod nvme_keyring 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 3504509 ']' 
00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 3504509 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 3504509 ']' 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 3504509 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3504509 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3504509' 00:30:14.060 killing process with pid 3504509 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 3504509 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 3504509 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- nvmf/common.sh@791 -- # iptables-save 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:14.060 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.631 13:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:14.631 00:30:14.631 real 1m15.445s 00:30:14.631 user 4m38.014s 00:30:14.631 sys 0m8.103s 00:30:14.632 13:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:14.632 13:00:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:14.632 ************************************ 00:30:14.632 END TEST nvmf_initiator_timeout 00:30:14.632 ************************************ 00:30:14.632 13:00:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:30:14.632 13:00:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:30:14.632 13:00:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:30:14.632 13:00:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:30:14.632 13:00:44 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:22.777 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:22.777 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:30:22.777 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:22.777 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:22.777 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:22.777 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:22.777 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:22.777 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:30:22.777 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:22.777 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:30:22.777 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:30:22.777 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:30:22.777 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:22.778 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:22.778 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:22.778 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:22.778 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:22.778 
13:00:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:22.778 ************************************ 00:30:22.778 START TEST nvmf_perf_adq 00:30:22.778 ************************************ 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:30:22.778 * Looking for test storage... 00:30:22.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:30:22.778 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 
00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:22.778 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:22.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.779 --rc genhtml_branch_coverage=1 00:30:22.779 --rc genhtml_function_coverage=1 00:30:22.779 --rc genhtml_legend=1 00:30:22.779 --rc geninfo_all_blocks=1 00:30:22.779 --rc geninfo_unexecuted_blocks=1 00:30:22.779 00:30:22.779 ' 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:22.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.779 --rc genhtml_branch_coverage=1 00:30:22.779 --rc genhtml_function_coverage=1 00:30:22.779 --rc genhtml_legend=1 00:30:22.779 --rc geninfo_all_blocks=1 00:30:22.779 --rc geninfo_unexecuted_blocks=1 00:30:22.779 00:30:22.779 ' 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:22.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.779 --rc genhtml_branch_coverage=1 00:30:22.779 --rc genhtml_function_coverage=1 00:30:22.779 --rc genhtml_legend=1 00:30:22.779 --rc geninfo_all_blocks=1 00:30:22.779 --rc geninfo_unexecuted_blocks=1 00:30:22.779 00:30:22.779 ' 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:22.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.779 --rc genhtml_branch_coverage=1 00:30:22.779 --rc genhtml_function_coverage=1 00:30:22.779 --rc genhtml_legend=1 00:30:22.779 --rc geninfo_all_blocks=1 00:30:22.779 --rc 
geninfo_unexecuted_blocks=1 00:30:22.779 00:30:22.779 ' 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:22.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:30:22.779 13:00:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:30:22.779 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:29.366 13:00:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:29.366 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:29.366 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:29.367 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:29.367 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:29.367 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:30:29.367 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:30:31.280 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:30:33.208 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:38.505 13:01:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:38.505 13:01:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:38.505 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:4b:00.1 (0x8086 - 0x159b)' 00:30:38.505 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:38.505 Found net devices under 0000:4b:00.0: cvl_0_0 
00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:38.505 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:38.505 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:38.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:38.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:30:38.506 00:30:38.506 --- 10.0.0.2 ping statistics --- 00:30:38.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.506 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:38.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:38.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:30:38.506 00:30:38.506 --- 10.0.0.1 ping statistics --- 00:30:38.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.506 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3527273 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3527273 00:30:38.506 
13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3527273 ']' 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:38.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:38.506 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:38.506 [2024-11-28 13:01:08.340635] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:30:38.506 [2024-11-28 13:01:08.340703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:38.506 [2024-11-28 13:01:08.485018] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:38.506 [2024-11-28 13:01:08.545069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:38.506 [2024-11-28 13:01:08.573032] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:38.506 [2024-11-28 13:01:08.573075] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:38.506 [2024-11-28 13:01:08.573083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:38.506 [2024-11-28 13:01:08.573091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:38.506 [2024-11-28 13:01:08.573097] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:38.506 [2024-11-28 13:01:08.575263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:38.506 [2024-11-28 13:01:08.575434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:38.506 [2024-11-28 13:01:08.575584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.506 [2024-11-28 13:01:08.575585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:39.077 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:39.077 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:30:39.077 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:39.077 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:39.077 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:30:39.338 13:01:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:39.338 [2024-11-28 13:01:09.358203] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:39.338 Malloc1 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@10 -- # set +x 00:30:39.338 [2024-11-28 13:01:09.434901] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3527394 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:30:39.338 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:41.884 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:30:41.884 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.884 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:41.884 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.884 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:30:41.884 "tick_rate": 2394400000, 00:30:41.884 "poll_groups": [ 00:30:41.884 { 00:30:41.884 "name": "nvmf_tgt_poll_group_000", 00:30:41.884 "admin_qpairs": 1, 00:30:41.884 "io_qpairs": 1, 00:30:41.884 "current_admin_qpairs": 1, 00:30:41.884 "current_io_qpairs": 1, 00:30:41.884 "pending_bdev_io": 0, 00:30:41.884 "completed_nvme_io": 14968, 00:30:41.884 "transports": [ 00:30:41.884 { 00:30:41.884 "trtype": "TCP" 00:30:41.884 } 00:30:41.884 ] 00:30:41.884 }, 00:30:41.884 { 00:30:41.884 "name": "nvmf_tgt_poll_group_001", 00:30:41.884 "admin_qpairs": 0, 00:30:41.884 "io_qpairs": 1, 00:30:41.884 "current_admin_qpairs": 0, 
00:30:41.884 "current_io_qpairs": 1, 00:30:41.884 "pending_bdev_io": 0, 00:30:41.884 "completed_nvme_io": 15558, 00:30:41.884 "transports": [ 00:30:41.884 { 00:30:41.884 "trtype": "TCP" 00:30:41.884 } 00:30:41.884 ] 00:30:41.885 }, 00:30:41.885 { 00:30:41.885 "name": "nvmf_tgt_poll_group_002", 00:30:41.885 "admin_qpairs": 0, 00:30:41.885 "io_qpairs": 1, 00:30:41.885 "current_admin_qpairs": 0, 00:30:41.885 "current_io_qpairs": 1, 00:30:41.885 "pending_bdev_io": 0, 00:30:41.885 "completed_nvme_io": 16714, 00:30:41.885 "transports": [ 00:30:41.885 { 00:30:41.885 "trtype": "TCP" 00:30:41.885 } 00:30:41.885 ] 00:30:41.885 }, 00:30:41.885 { 00:30:41.885 "name": "nvmf_tgt_poll_group_003", 00:30:41.885 "admin_qpairs": 0, 00:30:41.885 "io_qpairs": 1, 00:30:41.885 "current_admin_qpairs": 0, 00:30:41.885 "current_io_qpairs": 1, 00:30:41.885 "pending_bdev_io": 0, 00:30:41.885 "completed_nvme_io": 15199, 00:30:41.885 "transports": [ 00:30:41.885 { 00:30:41.885 "trtype": "TCP" 00:30:41.885 } 00:30:41.885 ] 00:30:41.885 } 00:30:41.885 ] 00:30:41.885 }' 00:30:41.885 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:30:41.885 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:30:41.885 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:30:41.885 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:30:41.885 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3527394 00:30:50.013 Initializing NVMe Controllers 00:30:50.013 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:50.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:30:50.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:30:50.014 Associating TCP 
(addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:30:50.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:30:50.014 Initialization complete. Launching workers. 00:30:50.014 ======================================================== 00:30:50.014 Latency(us) 00:30:50.014 Device Information : IOPS MiB/s Average min max 00:30:50.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12188.50 47.61 5250.85 1249.45 10947.69 00:30:50.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12967.10 50.65 4935.60 1216.79 12889.33 00:30:50.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13079.10 51.09 4893.11 1290.06 44926.71 00:30:50.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12631.50 49.34 5066.36 1279.68 14766.60 00:30:50.014 ======================================================== 00:30:50.014 Total : 50866.19 198.70 5032.69 1216.79 44926.71 00:30:50.014 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:50.014 rmmod nvme_tcp 00:30:50.014 rmmod nvme_fabrics 00:30:50.014 rmmod nvme_keyring 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:50.014 13:01:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3527273 ']' 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3527273 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3527273 ']' 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3527273 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3527273 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3527273' 00:30:50.014 killing process with pid 3527273 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3527273 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3527273 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:50.014 
13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:50.014 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.927 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:51.927 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:30:51.927 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:30:51.927 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:30:53.841 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:30:55.753 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:31:01.048 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:31:01.048 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:01.048 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:31:01.048 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:01.048 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:01.048 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:01.048 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.048 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:01.048 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:01.049 13:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:01.049 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:01.049 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.049 13:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:01.049 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:01.049 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:01.049 13:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:01.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:01.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.748 ms 00:31:01.049 00:31:01.049 --- 10.0.0.2 ping statistics --- 00:31:01.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.049 rtt min/avg/max/mdev = 0.748/0.748/0.748/0.000 ms 00:31:01.049 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:01.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:01.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:31:01.049 00:31:01.049 --- 10.0.0.1 ping statistics --- 00:31:01.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.050 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:31:01.050 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:01.050 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:31:01.050 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:01.050 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:01.050 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:01.050 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:01.050 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:01.050 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:01.050 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:01.050 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:31:01.050 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:31:01.050 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:31:01.050 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:31:01.050 net.core.busy_poll = 1 00:31:01.050 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:31:01.050 net.core.busy_read = 1 00:31:01.050 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:31:01.050 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:31:01.311 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:31:01.311 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:31:01.311 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:31:01.311 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:01.311 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:01.311 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:01.311 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:01.311 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3531927 00:31:01.311 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3531927 00:31:01.311 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:31:01.311 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3531927 ']' 00:31:01.311 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:01.311 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:01.311 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:01.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:01.311 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:01.311 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:01.311 [2024-11-28 13:01:31.329248] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:31:01.311 [2024-11-28 13:01:31.329320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:01.572 [2024-11-28 13:01:31.473513] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:01.572 [2024-11-28 13:01:31.533430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:01.572 [2024-11-28 13:01:31.561652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:01.572 [2024-11-28 13:01:31.561697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:01.572 [2024-11-28 13:01:31.561705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:01.572 [2024-11-28 13:01:31.561712] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:01.572 [2024-11-28 13:01:31.561719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:01.572 [2024-11-28 13:01:31.563929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:01.572 [2024-11-28 13:01:31.564069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:01.572 [2024-11-28 13:01:31.564210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:01.572 [2024-11-28 13:01:31.564255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.144 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:02.144 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:31:02.144 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:02.144 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:02.144 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:02.144 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:02.144 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:31:02.144 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:31:02.144 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:31:02.144 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.144 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:02.144 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.144 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:31:02.144 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:31:02.144 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.144 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:02.144 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.144 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:31:02.144 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.144 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:02.404 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.404 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:31:02.404 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.404 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:02.404 [2024-11-28 13:01:32.346576] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:02.404 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.404 
13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:02.404 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.404 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:02.404 Malloc1 00:31:02.404 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.404 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:02.404 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.404 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:02.404 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.404 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:02.404 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.405 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:02.405 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.405 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:02.405 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.405 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:02.405 [2024-11-28 13:01:32.421856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:31:02.405 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.405 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3532125 00:31:02.405 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:31:02.405 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:04.318 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:31:04.318 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.318 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:04.579 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.579 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:31:04.579 "tick_rate": 2394400000, 00:31:04.579 "poll_groups": [ 00:31:04.579 { 00:31:04.579 "name": "nvmf_tgt_poll_group_000", 00:31:04.579 "admin_qpairs": 1, 00:31:04.579 "io_qpairs": 3, 00:31:04.579 "current_admin_qpairs": 1, 00:31:04.579 "current_io_qpairs": 3, 00:31:04.579 "pending_bdev_io": 0, 00:31:04.579 "completed_nvme_io": 24694, 00:31:04.579 "transports": [ 00:31:04.579 { 00:31:04.579 "trtype": "TCP" 00:31:04.579 } 00:31:04.579 ] 00:31:04.579 }, 00:31:04.579 { 00:31:04.579 "name": "nvmf_tgt_poll_group_001", 00:31:04.579 "admin_qpairs": 0, 00:31:04.579 "io_qpairs": 1, 00:31:04.579 "current_admin_qpairs": 0, 00:31:04.579 "current_io_qpairs": 1, 00:31:04.579 "pending_bdev_io": 0, 00:31:04.579 "completed_nvme_io": 26930, 00:31:04.579 "transports": [ 
00:31:04.579 { 00:31:04.579 "trtype": "TCP" 00:31:04.579 } 00:31:04.579 ] 00:31:04.579 }, 00:31:04.579 { 00:31:04.579 "name": "nvmf_tgt_poll_group_002", 00:31:04.579 "admin_qpairs": 0, 00:31:04.579 "io_qpairs": 0, 00:31:04.579 "current_admin_qpairs": 0, 00:31:04.579 "current_io_qpairs": 0, 00:31:04.579 "pending_bdev_io": 0, 00:31:04.579 "completed_nvme_io": 0, 00:31:04.579 "transports": [ 00:31:04.579 { 00:31:04.579 "trtype": "TCP" 00:31:04.579 } 00:31:04.579 ] 00:31:04.579 }, 00:31:04.579 { 00:31:04.579 "name": "nvmf_tgt_poll_group_003", 00:31:04.579 "admin_qpairs": 0, 00:31:04.579 "io_qpairs": 0, 00:31:04.579 "current_admin_qpairs": 0, 00:31:04.579 "current_io_qpairs": 0, 00:31:04.579 "pending_bdev_io": 0, 00:31:04.579 "completed_nvme_io": 0, 00:31:04.579 "transports": [ 00:31:04.579 { 00:31:04.579 "trtype": "TCP" 00:31:04.579 } 00:31:04.579 ] 00:31:04.579 } 00:31:04.579 ] 00:31:04.579 }' 00:31:04.579 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:31:04.579 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:31:04.579 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:31:04.579 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:31:04.579 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3532125 00:31:12.892 Initializing NVMe Controllers 00:31:12.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:12.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:31:12.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:31:12.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:31:12.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) 
NSID 1 with lcore 7 00:31:12.892 Initialization complete. Launching workers. 00:31:12.892 ======================================================== 00:31:12.892 Latency(us) 00:31:12.892 Device Information : IOPS MiB/s Average min max 00:31:12.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6876.20 26.86 9309.02 1421.05 59223.19 00:31:12.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6175.10 24.12 10364.67 1373.65 62816.00 00:31:12.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6085.40 23.77 10549.36 1356.91 60208.17 00:31:12.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 19066.39 74.48 3356.33 936.37 47289.61 00:31:12.892 ======================================================== 00:31:12.892 Total : 38203.09 149.23 6706.36 936.37 62816.00 00:31:12.892 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:12.892 rmmod nvme_tcp 00:31:12.892 rmmod nvme_fabrics 00:31:12.892 rmmod nvme_keyring 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@129 -- # return 0 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3531927 ']' 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3531927 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3531927 ']' 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3531927 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3531927 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3531927' 00:31:12.892 killing process with pid 3531927 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3531927 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3531927 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@791 -- # iptables-save 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:12.892 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:31:16.187 00:31:16.187 real 0m54.179s 00:31:16.187 user 2m49.320s 00:31:16.187 sys 0m11.503s 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:16.187 ************************************ 00:31:16.187 END TEST nvmf_perf_adq 00:31:16.187 ************************************ 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:16.187 ************************************ 00:31:16.187 START TEST nvmf_shutdown 00:31:16.187 ************************************ 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:31:16.187 * Looking for test storage... 00:31:16.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # 
ver1_l=2 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:16.187 13:01:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:16.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.187 --rc genhtml_branch_coverage=1 00:31:16.187 --rc genhtml_function_coverage=1 00:31:16.187 --rc genhtml_legend=1 00:31:16.187 --rc geninfo_all_blocks=1 00:31:16.187 --rc geninfo_unexecuted_blocks=1 00:31:16.187 00:31:16.187 ' 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:16.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.187 --rc genhtml_branch_coverage=1 00:31:16.187 --rc genhtml_function_coverage=1 00:31:16.187 --rc genhtml_legend=1 00:31:16.187 --rc geninfo_all_blocks=1 00:31:16.187 --rc geninfo_unexecuted_blocks=1 00:31:16.187 00:31:16.187 ' 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:16.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.187 --rc genhtml_branch_coverage=1 00:31:16.187 --rc genhtml_function_coverage=1 00:31:16.187 --rc genhtml_legend=1 00:31:16.187 --rc geninfo_all_blocks=1 00:31:16.187 --rc geninfo_unexecuted_blocks=1 00:31:16.187 00:31:16.187 ' 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:16.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.187 --rc genhtml_branch_coverage=1 00:31:16.187 --rc genhtml_function_coverage=1 00:31:16.187 --rc genhtml_legend=1 00:31:16.187 --rc geninfo_all_blocks=1 00:31:16.187 --rc geninfo_unexecuted_blocks=1 00:31:16.187 00:31:16.187 ' 00:31:16.187 13:01:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:16.187 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:16.449 13:01:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:16.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:16.449 ************************************ 00:31:16.449 START TEST nvmf_shutdown_tc1 00:31:16.449 ************************************ 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:31:16.449 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:31:24.590 13:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:24.590 13:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:24.590 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:24.591 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.591 13:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:24.591 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:24.591 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:24.591 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:24.591 13:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:24.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:24.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:31:24.591 00:31:24.591 --- 10.0.0.2 ping statistics --- 00:31:24.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.591 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:24.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:24.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:31:24.591 00:31:24.591 --- 10.0.0.1 ping statistics --- 00:31:24.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.591 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3538587 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3538587 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3538587 ']' 00:31:24.591 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.592 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:24.592 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:24.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:24.592 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:24.592 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:24.592 [2024-11-28 13:01:54.045921] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:31:24.592 [2024-11-28 13:01:54.045990] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:24.592 [2024-11-28 13:01:54.192689] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:24.592 [2024-11-28 13:01:54.252788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:24.592 [2024-11-28 13:01:54.280406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:24.592 [2024-11-28 13:01:54.280451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:24.592 [2024-11-28 13:01:54.280460] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:24.592 [2024-11-28 13:01:54.280467] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:24.592 [2024-11-28 13:01:54.280473] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:24.592 [2024-11-28 13:01:54.282393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:24.592 [2024-11-28 13:01:54.282553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:24.592 [2024-11-28 13:01:54.282714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:24.592 [2024-11-28 13:01:54.282714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:24.853 [2024-11-28 13:01:54.918757] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.853 13:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:24.853 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:25.113 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:25.114 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:25.114 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:25.114 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:25.114 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:31:25.114 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.114 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:25.114 Malloc1 00:31:25.114 [2024-11-28 13:01:55.048398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:25.114 Malloc2 00:31:25.114 Malloc3 00:31:25.114 Malloc4 00:31:25.114 Malloc5 00:31:25.373 Malloc6 00:31:25.373 Malloc7 00:31:25.373 Malloc8 00:31:25.373 Malloc9 
00:31:25.373 Malloc10 00:31:25.373 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.373 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:31:25.373 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:25.373 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:25.634 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3538971 00:31:25.634 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3538971 /var/tmp/bdevperf.sock 00:31:25.634 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3538971 ']' 00:31:25.634 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:25.634 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:25.634 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:25.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:25.634 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:31:25.634 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:25.634 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:25.634 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:25.634 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:31:25.634 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:31:25.634 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:25.634 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:25.634 { 00:31:25.634 "params": { 00:31:25.634 "name": "Nvme$subsystem", 00:31:25.634 "trtype": "$TEST_TRANSPORT", 00:31:25.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.634 "adrfam": "ipv4", 00:31:25.634 "trsvcid": "$NVMF_PORT", 00:31:25.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.634 "hdgst": ${hdgst:-false}, 00:31:25.634 "ddgst": ${ddgst:-false} 00:31:25.634 }, 00:31:25.634 "method": "bdev_nvme_attach_controller" 00:31:25.634 } 00:31:25.634 EOF 00:31:25.634 )") 00:31:25.634 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:25.634 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:25.634 13:01:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:25.634 { 00:31:25.634 "params": { 00:31:25.634 "name": "Nvme$subsystem", 00:31:25.634 "trtype": "$TEST_TRANSPORT", 00:31:25.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.634 "adrfam": "ipv4", 00:31:25.634 "trsvcid": "$NVMF_PORT", 00:31:25.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.635 "hdgst": ${hdgst:-false}, 00:31:25.635 "ddgst": ${ddgst:-false} 00:31:25.635 }, 00:31:25.635 "method": "bdev_nvme_attach_controller" 00:31:25.635 } 00:31:25.635 EOF 00:31:25.635 )") 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:25.635 { 00:31:25.635 "params": { 00:31:25.635 "name": "Nvme$subsystem", 00:31:25.635 "trtype": "$TEST_TRANSPORT", 00:31:25.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.635 "adrfam": "ipv4", 00:31:25.635 "trsvcid": "$NVMF_PORT", 00:31:25.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.635 "hdgst": ${hdgst:-false}, 00:31:25.635 "ddgst": ${ddgst:-false} 00:31:25.635 }, 00:31:25.635 "method": "bdev_nvme_attach_controller" 00:31:25.635 } 00:31:25.635 EOF 00:31:25.635 )") 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:25.635 { 
00:31:25.635 "params": { 00:31:25.635 "name": "Nvme$subsystem", 00:31:25.635 "trtype": "$TEST_TRANSPORT", 00:31:25.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.635 "adrfam": "ipv4", 00:31:25.635 "trsvcid": "$NVMF_PORT", 00:31:25.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.635 "hdgst": ${hdgst:-false}, 00:31:25.635 "ddgst": ${ddgst:-false} 00:31:25.635 }, 00:31:25.635 "method": "bdev_nvme_attach_controller" 00:31:25.635 } 00:31:25.635 EOF 00:31:25.635 )") 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:25.635 { 00:31:25.635 "params": { 00:31:25.635 "name": "Nvme$subsystem", 00:31:25.635 "trtype": "$TEST_TRANSPORT", 00:31:25.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.635 "adrfam": "ipv4", 00:31:25.635 "trsvcid": "$NVMF_PORT", 00:31:25.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.635 "hdgst": ${hdgst:-false}, 00:31:25.635 "ddgst": ${ddgst:-false} 00:31:25.635 }, 00:31:25.635 "method": "bdev_nvme_attach_controller" 00:31:25.635 } 00:31:25.635 EOF 00:31:25.635 )") 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:25.635 { 00:31:25.635 "params": { 00:31:25.635 "name": "Nvme$subsystem", 00:31:25.635 "trtype": "$TEST_TRANSPORT", 00:31:25.635 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:31:25.635 "adrfam": "ipv4", 00:31:25.635 "trsvcid": "$NVMF_PORT", 00:31:25.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.635 "hdgst": ${hdgst:-false}, 00:31:25.635 "ddgst": ${ddgst:-false} 00:31:25.635 }, 00:31:25.635 "method": "bdev_nvme_attach_controller" 00:31:25.635 } 00:31:25.635 EOF 00:31:25.635 )") 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:25.635 [2024-11-28 13:01:55.563925] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:31:25.635 [2024-11-28 13:01:55.563997] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:25.635 { 00:31:25.635 "params": { 00:31:25.635 "name": "Nvme$subsystem", 00:31:25.635 "trtype": "$TEST_TRANSPORT", 00:31:25.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.635 "adrfam": "ipv4", 00:31:25.635 "trsvcid": "$NVMF_PORT", 00:31:25.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.635 "hdgst": ${hdgst:-false}, 00:31:25.635 "ddgst": ${ddgst:-false} 00:31:25.635 }, 00:31:25.635 "method": "bdev_nvme_attach_controller" 00:31:25.635 } 00:31:25.635 EOF 00:31:25.635 )") 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:25.635 { 00:31:25.635 "params": { 00:31:25.635 "name": "Nvme$subsystem", 00:31:25.635 "trtype": "$TEST_TRANSPORT", 00:31:25.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.635 "adrfam": "ipv4", 00:31:25.635 "trsvcid": "$NVMF_PORT", 00:31:25.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.635 "hdgst": ${hdgst:-false}, 00:31:25.635 "ddgst": ${ddgst:-false} 00:31:25.635 }, 00:31:25.635 "method": "bdev_nvme_attach_controller" 00:31:25.635 } 00:31:25.635 EOF 00:31:25.635 )") 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:25.635 { 00:31:25.635 "params": { 00:31:25.635 "name": "Nvme$subsystem", 00:31:25.635 "trtype": "$TEST_TRANSPORT", 00:31:25.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.635 "adrfam": "ipv4", 00:31:25.635 "trsvcid": "$NVMF_PORT", 00:31:25.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.635 "hdgst": ${hdgst:-false}, 00:31:25.635 "ddgst": ${ddgst:-false} 00:31:25.635 }, 00:31:25.635 "method": "bdev_nvme_attach_controller" 00:31:25.635 } 00:31:25.635 EOF 00:31:25.635 )") 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:31:25.635 { 00:31:25.635 "params": { 00:31:25.635 "name": "Nvme$subsystem", 00:31:25.635 "trtype": "$TEST_TRANSPORT", 00:31:25.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.635 "adrfam": "ipv4", 00:31:25.635 "trsvcid": "$NVMF_PORT", 00:31:25.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.635 "hdgst": ${hdgst:-false}, 00:31:25.635 "ddgst": ${ddgst:-false} 00:31:25.635 }, 00:31:25.635 "method": "bdev_nvme_attach_controller" 00:31:25.635 } 00:31:25.635 EOF 00:31:25.635 )") 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:31:25.635 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:25.635 "params": { 00:31:25.635 "name": "Nvme1", 00:31:25.635 "trtype": "tcp", 00:31:25.635 "traddr": "10.0.0.2", 00:31:25.635 "adrfam": "ipv4", 00:31:25.635 "trsvcid": "4420", 00:31:25.635 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:25.635 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:25.635 "hdgst": false, 00:31:25.635 "ddgst": false 00:31:25.635 }, 00:31:25.635 "method": "bdev_nvme_attach_controller" 00:31:25.635 },{ 00:31:25.635 "params": { 00:31:25.635 "name": "Nvme2", 00:31:25.636 "trtype": "tcp", 00:31:25.636 "traddr": "10.0.0.2", 00:31:25.636 "adrfam": "ipv4", 00:31:25.636 "trsvcid": "4420", 00:31:25.636 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:25.636 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:25.636 "hdgst": false, 00:31:25.636 "ddgst": false 00:31:25.636 }, 00:31:25.636 "method": "bdev_nvme_attach_controller" 00:31:25.636 },{ 00:31:25.636 "params": { 00:31:25.636 "name": "Nvme3", 00:31:25.636 "trtype": "tcp", 00:31:25.636 "traddr": 
"10.0.0.2", 00:31:25.636 "adrfam": "ipv4", 00:31:25.636 "trsvcid": "4420", 00:31:25.636 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:25.636 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:25.636 "hdgst": false, 00:31:25.636 "ddgst": false 00:31:25.636 }, 00:31:25.636 "method": "bdev_nvme_attach_controller" 00:31:25.636 },{ 00:31:25.636 "params": { 00:31:25.636 "name": "Nvme4", 00:31:25.636 "trtype": "tcp", 00:31:25.636 "traddr": "10.0.0.2", 00:31:25.636 "adrfam": "ipv4", 00:31:25.636 "trsvcid": "4420", 00:31:25.636 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:25.636 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:25.636 "hdgst": false, 00:31:25.636 "ddgst": false 00:31:25.636 }, 00:31:25.636 "method": "bdev_nvme_attach_controller" 00:31:25.636 },{ 00:31:25.636 "params": { 00:31:25.636 "name": "Nvme5", 00:31:25.636 "trtype": "tcp", 00:31:25.636 "traddr": "10.0.0.2", 00:31:25.636 "adrfam": "ipv4", 00:31:25.636 "trsvcid": "4420", 00:31:25.636 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:25.636 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:25.636 "hdgst": false, 00:31:25.636 "ddgst": false 00:31:25.636 }, 00:31:25.636 "method": "bdev_nvme_attach_controller" 00:31:25.636 },{ 00:31:25.636 "params": { 00:31:25.636 "name": "Nvme6", 00:31:25.636 "trtype": "tcp", 00:31:25.636 "traddr": "10.0.0.2", 00:31:25.636 "adrfam": "ipv4", 00:31:25.636 "trsvcid": "4420", 00:31:25.636 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:25.636 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:25.636 "hdgst": false, 00:31:25.636 "ddgst": false 00:31:25.636 }, 00:31:25.636 "method": "bdev_nvme_attach_controller" 00:31:25.636 },{ 00:31:25.636 "params": { 00:31:25.636 "name": "Nvme7", 00:31:25.636 "trtype": "tcp", 00:31:25.636 "traddr": "10.0.0.2", 00:31:25.636 "adrfam": "ipv4", 00:31:25.636 "trsvcid": "4420", 00:31:25.636 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:25.636 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:25.636 "hdgst": false, 00:31:25.636 "ddgst": false 00:31:25.636 }, 00:31:25.636 
"method": "bdev_nvme_attach_controller" 00:31:25.636 },{ 00:31:25.636 "params": { 00:31:25.636 "name": "Nvme8", 00:31:25.636 "trtype": "tcp", 00:31:25.636 "traddr": "10.0.0.2", 00:31:25.636 "adrfam": "ipv4", 00:31:25.636 "trsvcid": "4420", 00:31:25.636 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:25.636 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:25.636 "hdgst": false, 00:31:25.636 "ddgst": false 00:31:25.636 }, 00:31:25.636 "method": "bdev_nvme_attach_controller" 00:31:25.636 },{ 00:31:25.636 "params": { 00:31:25.636 "name": "Nvme9", 00:31:25.636 "trtype": "tcp", 00:31:25.636 "traddr": "10.0.0.2", 00:31:25.636 "adrfam": "ipv4", 00:31:25.636 "trsvcid": "4420", 00:31:25.636 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:25.636 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:25.636 "hdgst": false, 00:31:25.636 "ddgst": false 00:31:25.636 }, 00:31:25.636 "method": "bdev_nvme_attach_controller" 00:31:25.636 },{ 00:31:25.636 "params": { 00:31:25.636 "name": "Nvme10", 00:31:25.636 "trtype": "tcp", 00:31:25.636 "traddr": "10.0.0.2", 00:31:25.636 "adrfam": "ipv4", 00:31:25.636 "trsvcid": "4420", 00:31:25.636 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:25.636 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:25.636 "hdgst": false, 00:31:25.636 "ddgst": false 00:31:25.636 }, 00:31:25.636 "method": "bdev_nvme_attach_controller" 00:31:25.636 }' 00:31:25.636 [2024-11-28 13:01:55.702561] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:31:25.896 [2024-11-28 13:01:55.763441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.896 [2024-11-28 13:01:55.791910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.277 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:27.277 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:31:27.277 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:27.277 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.277 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:27.277 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.277 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3538971 00:31:27.277 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:31:27.277 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:31:28.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3538971 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3538587 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 
-w verify -t 1 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:28.218 { 00:31:28.218 "params": { 00:31:28.218 "name": "Nvme$subsystem", 00:31:28.218 "trtype": "$TEST_TRANSPORT", 00:31:28.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.218 "adrfam": "ipv4", 00:31:28.218 "trsvcid": "$NVMF_PORT", 00:31:28.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.218 "hdgst": ${hdgst:-false}, 00:31:28.218 "ddgst": ${ddgst:-false} 00:31:28.218 }, 00:31:28.218 "method": "bdev_nvme_attach_controller" 00:31:28.218 } 00:31:28.218 EOF 00:31:28.218 )") 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:28.218 { 00:31:28.218 "params": { 00:31:28.218 "name": "Nvme$subsystem", 00:31:28.218 "trtype": "$TEST_TRANSPORT", 00:31:28.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.218 "adrfam": "ipv4", 00:31:28.218 "trsvcid": "$NVMF_PORT", 00:31:28.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.218 
"hdgst": ${hdgst:-false}, 00:31:28.218 "ddgst": ${ddgst:-false} 00:31:28.218 }, 00:31:28.218 "method": "bdev_nvme_attach_controller" 00:31:28.218 } 00:31:28.218 EOF 00:31:28.218 )") 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:28.218 { 00:31:28.218 "params": { 00:31:28.218 "name": "Nvme$subsystem", 00:31:28.218 "trtype": "$TEST_TRANSPORT", 00:31:28.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.218 "adrfam": "ipv4", 00:31:28.218 "trsvcid": "$NVMF_PORT", 00:31:28.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.218 "hdgst": ${hdgst:-false}, 00:31:28.218 "ddgst": ${ddgst:-false} 00:31:28.218 }, 00:31:28.218 "method": "bdev_nvme_attach_controller" 00:31:28.218 } 00:31:28.218 EOF 00:31:28.218 )") 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:28.218 { 00:31:28.218 "params": { 00:31:28.218 "name": "Nvme$subsystem", 00:31:28.218 "trtype": "$TEST_TRANSPORT", 00:31:28.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.218 "adrfam": "ipv4", 00:31:28.218 "trsvcid": "$NVMF_PORT", 00:31:28.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.218 "hdgst": ${hdgst:-false}, 00:31:28.218 "ddgst": ${ddgst:-false} 00:31:28.218 }, 00:31:28.218 "method": 
"bdev_nvme_attach_controller" 00:31:28.218 } 00:31:28.218 EOF 00:31:28.218 )") 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:28.218 { 00:31:28.218 "params": { 00:31:28.218 "name": "Nvme$subsystem", 00:31:28.218 "trtype": "$TEST_TRANSPORT", 00:31:28.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.218 "adrfam": "ipv4", 00:31:28.218 "trsvcid": "$NVMF_PORT", 00:31:28.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.218 "hdgst": ${hdgst:-false}, 00:31:28.218 "ddgst": ${ddgst:-false} 00:31:28.218 }, 00:31:28.218 "method": "bdev_nvme_attach_controller" 00:31:28.218 } 00:31:28.218 EOF 00:31:28.218 )") 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:28.218 { 00:31:28.218 "params": { 00:31:28.218 "name": "Nvme$subsystem", 00:31:28.218 "trtype": "$TEST_TRANSPORT", 00:31:28.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.218 "adrfam": "ipv4", 00:31:28.218 "trsvcid": "$NVMF_PORT", 00:31:28.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.218 "hdgst": ${hdgst:-false}, 00:31:28.218 "ddgst": ${ddgst:-false} 00:31:28.218 }, 00:31:28.218 "method": "bdev_nvme_attach_controller" 00:31:28.218 } 00:31:28.218 EOF 00:31:28.218 )") 00:31:28.218 13:01:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:28.218 { 00:31:28.218 "params": { 00:31:28.218 "name": "Nvme$subsystem", 00:31:28.218 "trtype": "$TEST_TRANSPORT", 00:31:28.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.218 "adrfam": "ipv4", 00:31:28.218 "trsvcid": "$NVMF_PORT", 00:31:28.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.218 "hdgst": ${hdgst:-false}, 00:31:28.218 "ddgst": ${ddgst:-false} 00:31:28.218 }, 00:31:28.218 "method": "bdev_nvme_attach_controller" 00:31:28.218 } 00:31:28.218 EOF 00:31:28.218 )") 00:31:28.218 [2024-11-28 13:01:58.093272] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:31:28.218 [2024-11-28 13:01:58.093328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3539352 ] 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:28.218 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:28.218 { 00:31:28.219 "params": { 00:31:28.219 "name": "Nvme$subsystem", 00:31:28.219 "trtype": "$TEST_TRANSPORT", 00:31:28.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.219 "adrfam": "ipv4", 00:31:28.219 "trsvcid": "$NVMF_PORT", 00:31:28.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.219 "hdgst": ${hdgst:-false}, 00:31:28.219 "ddgst": ${ddgst:-false} 00:31:28.219 }, 00:31:28.219 "method": "bdev_nvme_attach_controller" 00:31:28.219 } 00:31:28.219 EOF 00:31:28.219 )") 00:31:28.219 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:28.219 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:28.219 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:28.219 { 00:31:28.219 "params": { 00:31:28.219 "name": "Nvme$subsystem", 00:31:28.219 "trtype": "$TEST_TRANSPORT", 00:31:28.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.219 "adrfam": "ipv4", 00:31:28.219 "trsvcid": "$NVMF_PORT", 00:31:28.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.219 "hdgst": 
${hdgst:-false}, 00:31:28.219 "ddgst": ${ddgst:-false} 00:31:28.219 }, 00:31:28.219 "method": "bdev_nvme_attach_controller" 00:31:28.219 } 00:31:28.219 EOF 00:31:28.219 )") 00:31:28.219 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:28.219 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:28.219 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:28.219 { 00:31:28.219 "params": { 00:31:28.219 "name": "Nvme$subsystem", 00:31:28.219 "trtype": "$TEST_TRANSPORT", 00:31:28.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.219 "adrfam": "ipv4", 00:31:28.219 "trsvcid": "$NVMF_PORT", 00:31:28.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.219 "hdgst": ${hdgst:-false}, 00:31:28.219 "ddgst": ${ddgst:-false} 00:31:28.219 }, 00:31:28.219 "method": "bdev_nvme_attach_controller" 00:31:28.219 } 00:31:28.219 EOF 00:31:28.219 )") 00:31:28.219 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:28.219 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:31:28.219 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:31:28.219 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:28.219 "params": { 00:31:28.219 "name": "Nvme1", 00:31:28.219 "trtype": "tcp", 00:31:28.219 "traddr": "10.0.0.2", 00:31:28.219 "adrfam": "ipv4", 00:31:28.219 "trsvcid": "4420", 00:31:28.219 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:28.219 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:28.219 "hdgst": false, 00:31:28.219 "ddgst": false 00:31:28.219 }, 00:31:28.219 "method": "bdev_nvme_attach_controller" 00:31:28.219 },{ 00:31:28.219 "params": { 00:31:28.219 "name": "Nvme2", 00:31:28.219 "trtype": "tcp", 00:31:28.219 "traddr": "10.0.0.2", 00:31:28.219 "adrfam": "ipv4", 00:31:28.219 "trsvcid": "4420", 00:31:28.219 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:28.219 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:28.219 "hdgst": false, 00:31:28.219 "ddgst": false 00:31:28.219 }, 00:31:28.219 "method": "bdev_nvme_attach_controller" 00:31:28.219 },{ 00:31:28.219 "params": { 00:31:28.219 "name": "Nvme3", 00:31:28.219 "trtype": "tcp", 00:31:28.219 "traddr": "10.0.0.2", 00:31:28.219 "adrfam": "ipv4", 00:31:28.219 "trsvcid": "4420", 00:31:28.219 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:28.219 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:28.219 "hdgst": false, 00:31:28.219 "ddgst": false 00:31:28.219 }, 00:31:28.219 "method": "bdev_nvme_attach_controller" 00:31:28.219 },{ 00:31:28.219 "params": { 00:31:28.219 "name": "Nvme4", 00:31:28.219 "trtype": "tcp", 00:31:28.219 "traddr": "10.0.0.2", 00:31:28.219 "adrfam": "ipv4", 00:31:28.219 "trsvcid": "4420", 00:31:28.219 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:28.219 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:28.219 "hdgst": false, 00:31:28.219 "ddgst": false 00:31:28.219 }, 00:31:28.219 "method": "bdev_nvme_attach_controller" 00:31:28.219 },{ 00:31:28.219 "params": { 
00:31:28.219 "name": "Nvme5", 00:31:28.219 "trtype": "tcp", 00:31:28.219 "traddr": "10.0.0.2", 00:31:28.219 "adrfam": "ipv4", 00:31:28.219 "trsvcid": "4420", 00:31:28.219 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:28.219 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:28.219 "hdgst": false, 00:31:28.219 "ddgst": false 00:31:28.219 }, 00:31:28.219 "method": "bdev_nvme_attach_controller" 00:31:28.219 },{ 00:31:28.219 "params": { 00:31:28.219 "name": "Nvme6", 00:31:28.219 "trtype": "tcp", 00:31:28.219 "traddr": "10.0.0.2", 00:31:28.219 "adrfam": "ipv4", 00:31:28.219 "trsvcid": "4420", 00:31:28.219 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:28.219 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:28.219 "hdgst": false, 00:31:28.219 "ddgst": false 00:31:28.219 }, 00:31:28.219 "method": "bdev_nvme_attach_controller" 00:31:28.219 },{ 00:31:28.219 "params": { 00:31:28.219 "name": "Nvme7", 00:31:28.219 "trtype": "tcp", 00:31:28.219 "traddr": "10.0.0.2", 00:31:28.219 "adrfam": "ipv4", 00:31:28.219 "trsvcid": "4420", 00:31:28.219 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:28.219 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:28.219 "hdgst": false, 00:31:28.219 "ddgst": false 00:31:28.219 }, 00:31:28.219 "method": "bdev_nvme_attach_controller" 00:31:28.219 },{ 00:31:28.219 "params": { 00:31:28.219 "name": "Nvme8", 00:31:28.219 "trtype": "tcp", 00:31:28.219 "traddr": "10.0.0.2", 00:31:28.219 "adrfam": "ipv4", 00:31:28.219 "trsvcid": "4420", 00:31:28.219 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:28.219 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:28.219 "hdgst": false, 00:31:28.219 "ddgst": false 00:31:28.219 }, 00:31:28.219 "method": "bdev_nvme_attach_controller" 00:31:28.219 },{ 00:31:28.219 "params": { 00:31:28.219 "name": "Nvme9", 00:31:28.219 "trtype": "tcp", 00:31:28.219 "traddr": "10.0.0.2", 00:31:28.219 "adrfam": "ipv4", 00:31:28.220 "trsvcid": "4420", 00:31:28.220 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:28.220 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:31:28.220 "hdgst": false, 00:31:28.220 "ddgst": false 00:31:28.220 }, 00:31:28.220 "method": "bdev_nvme_attach_controller" 00:31:28.220 },{ 00:31:28.220 "params": { 00:31:28.220 "name": "Nvme10", 00:31:28.220 "trtype": "tcp", 00:31:28.220 "traddr": "10.0.0.2", 00:31:28.220 "adrfam": "ipv4", 00:31:28.220 "trsvcid": "4420", 00:31:28.220 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:28.220 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:28.220 "hdgst": false, 00:31:28.220 "ddgst": false 00:31:28.220 }, 00:31:28.220 "method": "bdev_nvme_attach_controller" 00:31:28.220 }' 00:31:28.220 [2024-11-28 13:01:58.228229] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:28.220 [2024-11-28 13:01:58.287170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.220 [2024-11-28 13:01:58.305205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:29.602 Running I/O for 1 seconds... 
00:31:30.806 1802.00 IOPS, 112.62 MiB/s 00:31:30.806 Latency(us) 00:31:30.806 [2024-11-28T12:02:00.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.806 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:30.806 Verification LBA range: start 0x0 length 0x400 00:31:30.806 Nvme1n1 : 1.06 240.53 15.03 0.00 0.00 262526.70 13356.82 241736.53 00:31:30.806 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:30.806 Verification LBA range: start 0x0 length 0x400 00:31:30.806 Nvme2n1 : 1.16 221.05 13.82 0.00 0.00 281824.10 20363.68 252246.82 00:31:30.806 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:30.806 Verification LBA range: start 0x0 length 0x400 00:31:30.806 Nvme3n1 : 1.10 232.55 14.53 0.00 0.00 262604.00 22224.87 246991.67 00:31:30.806 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:30.806 Verification LBA range: start 0x0 length 0x400 00:31:30.806 Nvme4n1 : 1.07 239.25 14.95 0.00 0.00 250183.76 36129.10 221591.82 00:31:30.806 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:30.806 Verification LBA range: start 0x0 length 0x400 00:31:30.806 Nvme5n1 : 1.17 222.47 13.90 0.00 0.00 263818.92 6705.78 255750.24 00:31:30.806 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:30.806 Verification LBA range: start 0x0 length 0x400 00:31:30.806 Nvme6n1 : 1.17 219.26 13.70 0.00 0.00 264715.38 19049.89 261005.39 00:31:30.806 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:30.806 Verification LBA range: start 0x0 length 0x400 00:31:30.806 Nvme7n1 : 1.19 268.13 16.76 0.00 0.00 213051.19 13466.30 248743.39 00:31:30.806 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:30.806 Verification LBA range: start 0x0 length 0x400 00:31:30.806 Nvme8n1 : 1.20 266.29 16.64 0.00 0.00 210713.92 20801.60 241736.53 
00:31:30.806 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:30.806 Verification LBA range: start 0x0 length 0x400 00:31:30.806 Nvme9n1 : 1.21 265.18 16.57 0.00 0.00 207835.22 15874.91 245239.96 00:31:30.806 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:30.806 Verification LBA range: start 0x0 length 0x400 00:31:30.806 Nvme10n1 : 1.22 262.84 16.43 0.00 0.00 206159.12 9798.65 269763.96 00:31:30.806 [2024-11-28T12:02:00.933Z] =================================================================================================================== 00:31:30.806 [2024-11-28T12:02:00.933Z] Total : 2437.54 152.35 0.00 0.00 239386.72 6705.78 269763.96 00:31:30.806 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:31:30.806 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:31:30.806 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:30.806 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:30.806 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:31:30.806 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:30.806 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:31:30.806 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:30.806 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:31:30.806 13:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:30.806 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:30.806 rmmod nvme_tcp 00:31:30.806 rmmod nvme_fabrics 00:31:30.806 rmmod nvme_keyring 00:31:30.806 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:30.806 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:31:30.806 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:31:30.806 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3538587 ']' 00:31:30.806 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3538587 00:31:30.806 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3538587 ']' 00:31:30.806 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3538587 00:31:30.806 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:31:30.806 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:30.806 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3538587 00:31:31.066 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:31.066 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:31.066 13:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3538587' 00:31:31.066 killing process with pid 3538587 00:31:31.066 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3538587 00:31:31.066 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3538587 00:31:31.327 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:31.327 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:31.327 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:31.327 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:31:31.327 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:31:31.327 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:31.327 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:31:31.327 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:31.327 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:31.327 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.327 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:31.327 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.239 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:33.239 00:31:33.239 real 0m16.912s 00:31:33.239 user 0m33.449s 00:31:33.239 sys 0m7.079s 00:31:33.239 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:33.239 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:33.239 ************************************ 00:31:33.239 END TEST nvmf_shutdown_tc1 00:31:33.239 ************************************ 00:31:33.239 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:31:33.239 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:33.239 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:33.239 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:33.499 ************************************ 00:31:33.499 START TEST nvmf_shutdown_tc2 00:31:33.499 ************************************ 00:31:33.499 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:31:33.499 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:31:33.499 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:31:33.499 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:33.499 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:33.499 13:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:33.499 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:33.499 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:33.499 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.499 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:33.500 13:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:33.500 13:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:33.500 13:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:33.500 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:33.500 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:33.500 13:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:33.500 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:33.500 13:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:33.500 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:33.500 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:33.501 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:33.501 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:33.501 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:33.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:33.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.538 ms 00:31:33.762 00:31:33.762 --- 10.0.0.2 ping statistics --- 00:31:33.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.762 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:33.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:33.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:31:33.762 00:31:33.762 --- 10.0.0.1 ping statistics --- 00:31:33.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.762 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:33.762 
13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3540592 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3540592 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3540592 ']' 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:33.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:33.762 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:33.762 [2024-11-28 13:02:03.800417] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:31:33.762 [2024-11-28 13:02:03.800468] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:34.023 [2024-11-28 13:02:03.933432] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:34.023 [2024-11-28 13:02:03.987626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:34.023 [2024-11-28 13:02:04.004643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:34.023 [2024-11-28 13:02:04.004670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:34.023 [2024-11-28 13:02:04.004676] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:34.023 [2024-11-28 13:02:04.004680] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:34.023 [2024-11-28 13:02:04.004684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:34.023 [2024-11-28 13:02:04.005934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:34.023 [2024-11-28 13:02:04.006087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:34.023 [2024-11-28 13:02:04.006220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:34.023 [2024-11-28 13:02:04.006377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:34.595 [2024-11-28 13:02:04.661099] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.595 13:02:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:34.595 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:34.856 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:34.856 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:34.856 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:31:34.856 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.856 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:34.856 Malloc1 00:31:34.856 [2024-11-28 13:02:04.771696] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:34.856 Malloc2 00:31:34.856 Malloc3 00:31:34.856 Malloc4 00:31:34.856 Malloc5 00:31:34.856 Malloc6 00:31:34.856 Malloc7 00:31:35.118 Malloc8 00:31:35.118 Malloc9 
00:31:35.118 Malloc10 00:31:35.118 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.118 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:31:35.118 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:35.118 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:35.118 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3540841 00:31:35.118 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3540841 /var/tmp/bdevperf.sock 00:31:35.118 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3540841 ']' 00:31:35.118 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:35.118 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:35.118 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:35.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:35.118 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:35.118 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:35.118 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:35.118 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:35.118 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:31:35.118 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:31:35.118 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:35.118 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:35.118 { 00:31:35.118 "params": { 00:31:35.118 "name": "Nvme$subsystem", 00:31:35.118 "trtype": "$TEST_TRANSPORT", 00:31:35.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.118 "adrfam": "ipv4", 00:31:35.118 "trsvcid": "$NVMF_PORT", 00:31:35.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.118 "hdgst": ${hdgst:-false}, 00:31:35.119 "ddgst": ${ddgst:-false} 00:31:35.119 }, 00:31:35.119 "method": "bdev_nvme_attach_controller" 00:31:35.119 } 00:31:35.119 EOF 00:31:35.119 )") 00:31:35.119 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:31:35.119 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:31:35.119 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:35.119 { 00:31:35.119 "params": { 00:31:35.119 "name": "Nvme$subsystem", 00:31:35.119 "trtype": "$TEST_TRANSPORT", 00:31:35.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.119 "adrfam": "ipv4", 00:31:35.119 "trsvcid": "$NVMF_PORT", 00:31:35.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.119 "hdgst": ${hdgst:-false}, 00:31:35.119 "ddgst": ${ddgst:-false} 00:31:35.119 }, 00:31:35.119 "method": "bdev_nvme_attach_controller" 00:31:35.119 } 00:31:35.119 EOF 00:31:35.119 )") 00:31:35.119 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:31:35.119 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:35.119 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:35.119 { 00:31:35.119 "params": { 00:31:35.119 "name": "Nvme$subsystem", 00:31:35.119 "trtype": "$TEST_TRANSPORT", 00:31:35.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.119 "adrfam": "ipv4", 00:31:35.119 "trsvcid": "$NVMF_PORT", 00:31:35.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.119 "hdgst": ${hdgst:-false}, 00:31:35.119 "ddgst": ${ddgst:-false} 00:31:35.119 }, 00:31:35.119 "method": "bdev_nvme_attach_controller" 00:31:35.119 } 00:31:35.119 EOF 00:31:35.119 )") 00:31:35.119 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:31:35.119 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:35.119 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:31:35.119 { 00:31:35.119 "params": { 00:31:35.119 "name": "Nvme$subsystem", 00:31:35.119 "trtype": "$TEST_TRANSPORT", 00:31:35.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.119 "adrfam": "ipv4", 00:31:35.119 "trsvcid": "$NVMF_PORT", 00:31:35.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.119 "hdgst": ${hdgst:-false}, 00:31:35.119 "ddgst": ${ddgst:-false} 00:31:35.119 }, 00:31:35.119 "method": "bdev_nvme_attach_controller" 00:31:35.119 } 00:31:35.119 EOF 00:31:35.119 )") 00:31:35.119 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:31:35.119 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:35.119 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:35.119 { 00:31:35.119 "params": { 00:31:35.119 "name": "Nvme$subsystem", 00:31:35.119 "trtype": "$TEST_TRANSPORT", 00:31:35.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.119 "adrfam": "ipv4", 00:31:35.119 "trsvcid": "$NVMF_PORT", 00:31:35.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.119 "hdgst": ${hdgst:-false}, 00:31:35.119 "ddgst": ${ddgst:-false} 00:31:35.119 }, 00:31:35.119 "method": "bdev_nvme_attach_controller" 00:31:35.119 } 00:31:35.119 EOF 00:31:35.119 )") 00:31:35.119 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:31:35.119 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:35.119 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:35.119 { 00:31:35.119 "params": { 00:31:35.119 "name": "Nvme$subsystem", 00:31:35.119 "trtype": "$TEST_TRANSPORT", 
00:31:35.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.119 "adrfam": "ipv4", 00:31:35.119 "trsvcid": "$NVMF_PORT", 00:31:35.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.119 "hdgst": ${hdgst:-false}, 00:31:35.119 "ddgst": ${ddgst:-false} 00:31:35.119 }, 00:31:35.119 "method": "bdev_nvme_attach_controller" 00:31:35.119 } 00:31:35.119 EOF 00:31:35.119 )") 00:31:35.119 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:31:35.119 [2024-11-28 13:02:05.217175] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:31:35.119 [2024-11-28 13:02:05.217231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3540841 ] 00:31:35.119 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:35.119 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:35.119 { 00:31:35.119 "params": { 00:31:35.119 "name": "Nvme$subsystem", 00:31:35.119 "trtype": "$TEST_TRANSPORT", 00:31:35.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.119 "adrfam": "ipv4", 00:31:35.119 "trsvcid": "$NVMF_PORT", 00:31:35.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.119 "hdgst": ${hdgst:-false}, 00:31:35.119 "ddgst": ${ddgst:-false} 00:31:35.119 }, 00:31:35.119 "method": "bdev_nvme_attach_controller" 00:31:35.119 } 00:31:35.119 EOF 00:31:35.119 )") 00:31:35.119 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:31:35.119 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:35.119 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:35.119 { 00:31:35.119 "params": { 00:31:35.119 "name": "Nvme$subsystem", 00:31:35.119 "trtype": "$TEST_TRANSPORT", 00:31:35.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.119 "adrfam": "ipv4", 00:31:35.119 "trsvcid": "$NVMF_PORT", 00:31:35.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.119 "hdgst": ${hdgst:-false}, 00:31:35.119 "ddgst": ${ddgst:-false} 00:31:35.120 }, 00:31:35.120 "method": "bdev_nvme_attach_controller" 00:31:35.120 } 00:31:35.120 EOF 00:31:35.120 )") 00:31:35.120 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:31:35.120 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:35.120 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:35.120 { 00:31:35.120 "params": { 00:31:35.120 "name": "Nvme$subsystem", 00:31:35.120 "trtype": "$TEST_TRANSPORT", 00:31:35.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.120 "adrfam": "ipv4", 00:31:35.120 "trsvcid": "$NVMF_PORT", 00:31:35.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.120 "hdgst": ${hdgst:-false}, 00:31:35.120 "ddgst": ${ddgst:-false} 00:31:35.120 }, 00:31:35.120 "method": "bdev_nvme_attach_controller" 00:31:35.120 } 00:31:35.120 EOF 00:31:35.120 )") 00:31:35.120 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:31:35.120 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:35.120 13:02:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:35.120 { 00:31:35.120 "params": { 00:31:35.120 "name": "Nvme$subsystem", 00:31:35.120 "trtype": "$TEST_TRANSPORT", 00:31:35.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.120 "adrfam": "ipv4", 00:31:35.120 "trsvcid": "$NVMF_PORT", 00:31:35.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.120 "hdgst": ${hdgst:-false}, 00:31:35.120 "ddgst": ${ddgst:-false} 00:31:35.120 }, 00:31:35.120 "method": "bdev_nvme_attach_controller" 00:31:35.120 } 00:31:35.120 EOF 00:31:35.120 )") 00:31:35.381 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:31:35.381 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:31:35.381 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:31:35.381 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:35.381 "params": { 00:31:35.381 "name": "Nvme1", 00:31:35.381 "trtype": "tcp", 00:31:35.381 "traddr": "10.0.0.2", 00:31:35.381 "adrfam": "ipv4", 00:31:35.381 "trsvcid": "4420", 00:31:35.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:35.381 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:35.381 "hdgst": false, 00:31:35.381 "ddgst": false 00:31:35.381 }, 00:31:35.381 "method": "bdev_nvme_attach_controller" 00:31:35.381 },{ 00:31:35.381 "params": { 00:31:35.381 "name": "Nvme2", 00:31:35.381 "trtype": "tcp", 00:31:35.381 "traddr": "10.0.0.2", 00:31:35.381 "adrfam": "ipv4", 00:31:35.381 "trsvcid": "4420", 00:31:35.381 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:35.381 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:35.381 "hdgst": false, 00:31:35.381 "ddgst": false 00:31:35.381 }, 00:31:35.381 "method": "bdev_nvme_attach_controller" 00:31:35.381 },{ 
00:31:35.381 "params": { 00:31:35.381 "name": "Nvme3", 00:31:35.381 "trtype": "tcp", 00:31:35.381 "traddr": "10.0.0.2", 00:31:35.381 "adrfam": "ipv4", 00:31:35.381 "trsvcid": "4420", 00:31:35.381 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:35.381 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:35.381 "hdgst": false, 00:31:35.381 "ddgst": false 00:31:35.381 }, 00:31:35.381 "method": "bdev_nvme_attach_controller" 00:31:35.381 },{ 00:31:35.381 "params": { 00:31:35.381 "name": "Nvme4", 00:31:35.381 "trtype": "tcp", 00:31:35.381 "traddr": "10.0.0.2", 00:31:35.381 "adrfam": "ipv4", 00:31:35.381 "trsvcid": "4420", 00:31:35.381 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:35.381 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:35.381 "hdgst": false, 00:31:35.381 "ddgst": false 00:31:35.381 }, 00:31:35.381 "method": "bdev_nvme_attach_controller" 00:31:35.381 },{ 00:31:35.381 "params": { 00:31:35.381 "name": "Nvme5", 00:31:35.381 "trtype": "tcp", 00:31:35.381 "traddr": "10.0.0.2", 00:31:35.381 "adrfam": "ipv4", 00:31:35.381 "trsvcid": "4420", 00:31:35.381 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:35.381 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:35.381 "hdgst": false, 00:31:35.381 "ddgst": false 00:31:35.381 }, 00:31:35.381 "method": "bdev_nvme_attach_controller" 00:31:35.381 },{ 00:31:35.381 "params": { 00:31:35.381 "name": "Nvme6", 00:31:35.381 "trtype": "tcp", 00:31:35.381 "traddr": "10.0.0.2", 00:31:35.381 "adrfam": "ipv4", 00:31:35.381 "trsvcid": "4420", 00:31:35.381 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:35.381 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:35.381 "hdgst": false, 00:31:35.381 "ddgst": false 00:31:35.381 }, 00:31:35.381 "method": "bdev_nvme_attach_controller" 00:31:35.381 },{ 00:31:35.381 "params": { 00:31:35.381 "name": "Nvme7", 00:31:35.381 "trtype": "tcp", 00:31:35.381 "traddr": "10.0.0.2", 00:31:35.381 "adrfam": "ipv4", 00:31:35.381 "trsvcid": "4420", 00:31:35.381 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:35.381 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:31:35.381 "hdgst": false, 00:31:35.381 "ddgst": false 00:31:35.381 }, 00:31:35.381 "method": "bdev_nvme_attach_controller" 00:31:35.381 },{ 00:31:35.381 "params": { 00:31:35.381 "name": "Nvme8", 00:31:35.381 "trtype": "tcp", 00:31:35.381 "traddr": "10.0.0.2", 00:31:35.381 "adrfam": "ipv4", 00:31:35.381 "trsvcid": "4420", 00:31:35.381 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:35.381 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:35.381 "hdgst": false, 00:31:35.381 "ddgst": false 00:31:35.381 }, 00:31:35.381 "method": "bdev_nvme_attach_controller" 00:31:35.381 },{ 00:31:35.381 "params": { 00:31:35.381 "name": "Nvme9", 00:31:35.381 "trtype": "tcp", 00:31:35.381 "traddr": "10.0.0.2", 00:31:35.381 "adrfam": "ipv4", 00:31:35.381 "trsvcid": "4420", 00:31:35.381 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:35.381 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:35.381 "hdgst": false, 00:31:35.381 "ddgst": false 00:31:35.381 }, 00:31:35.381 "method": "bdev_nvme_attach_controller" 00:31:35.381 },{ 00:31:35.381 "params": { 00:31:35.381 "name": "Nvme10", 00:31:35.381 "trtype": "tcp", 00:31:35.381 "traddr": "10.0.0.2", 00:31:35.381 "adrfam": "ipv4", 00:31:35.381 "trsvcid": "4420", 00:31:35.381 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:35.381 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:35.381 "hdgst": false, 00:31:35.381 "ddgst": false 00:31:35.381 }, 00:31:35.381 "method": "bdev_nvme_attach_controller" 00:31:35.381 }' 00:31:35.381 [2024-11-28 13:02:05.351047] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:35.381 [2024-11-28 13:02:05.408388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.381 [2024-11-28 13:02:05.426849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.296 Running I/O for 10 seconds... 
00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3540841 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3540841 ']' 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3540841 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3540841 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 
-- # process_name=reactor_0 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3540841' 00:31:37.869 killing process with pid 3540841 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3540841 00:31:37.869 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3540841 00:31:37.869 Received shutdown signal, test time was about 0.895389 seconds 00:31:37.869 00:31:37.869 Latency(us) 00:31:37.869 [2024-11-28T12:02:07.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:37.869 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:37.869 Verification LBA range: start 0x0 length 0x400 00:31:37.869 Nvme1n1 : 0.85 225.27 14.08 0.00 0.00 280408.53 19378.34 290784.52 00:31:37.869 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:37.869 Verification LBA range: start 0x0 length 0x400 00:31:37.869 Nvme2n1 : 0.86 222.26 13.89 0.00 0.00 277713.67 18830.93 264508.81 00:31:37.869 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:37.869 Verification LBA range: start 0x0 length 0x400 00:31:37.869 Nvme3n1 : 0.89 287.17 17.95 0.00 0.00 210173.61 23319.69 219840.11 00:31:37.869 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:37.869 Verification LBA range: start 0x0 length 0x400 00:31:37.869 Nvme4n1 : 0.89 286.20 17.89 0.00 0.00 205944.86 17955.07 222467.68 00:31:37.869 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:37.869 Verification LBA range: start 0x0 length 0x400 00:31:37.869 Nvme5n1 : 0.88 218.73 13.67 0.00 0.00 262829.52 15218.02 
252246.82 00:31:37.869 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:37.869 Verification LBA range: start 0x0 length 0x400 00:31:37.869 Nvme6n1 : 0.87 220.16 13.76 0.00 0.00 253591.96 32844.64 229474.53 00:31:37.869 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:37.869 Verification LBA range: start 0x0 length 0x400 00:31:37.869 Nvme7n1 : 0.87 220.54 13.78 0.00 0.00 247512.28 36785.99 225971.11 00:31:37.869 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:37.869 Verification LBA range: start 0x0 length 0x400 00:31:37.869 Nvme8n1 : 0.89 288.15 18.01 0.00 0.00 185207.40 19487.82 221591.82 00:31:37.869 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:37.869 Verification LBA range: start 0x0 length 0x400 00:31:37.869 Nvme9n1 : 0.86 223.63 13.98 0.00 0.00 230640.92 20363.68 246991.67 00:31:37.869 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:37.869 Verification LBA range: start 0x0 length 0x400 00:31:37.869 Nvme10n1 : 0.88 217.15 13.57 0.00 0.00 232609.32 15327.50 264508.81 00:31:37.869 [2024-11-28T12:02:07.996Z] =================================================================================================================== 00:31:37.869 [2024-11-28T12:02:07.996Z] Total : 2409.26 150.58 0.00 0.00 235188.55 15218.02 290784.52 00:31:38.130 13:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:31:39.072 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3540592 00:31:39.072 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:31:39.072 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:31:39.072 13:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:39.072 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:39.072 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:31:39.072 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:39.072 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:31:39.072 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:39.072 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:31:39.072 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:39.072 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:39.072 rmmod nvme_tcp 00:31:39.072 rmmod nvme_fabrics 00:31:39.072 rmmod nvme_keyring 00:31:39.072 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:39.072 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:31:39.072 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:31:39.072 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3540592 ']' 00:31:39.072 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3540592 00:31:39.072 13:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3540592 ']' 00:31:39.072 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3540592 00:31:39.072 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:31:39.072 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:39.072 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3540592 00:31:39.333 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:39.333 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:39.333 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3540592' 00:31:39.333 killing process with pid 3540592 00:31:39.333 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3540592 00:31:39.333 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3540592 00:31:39.595 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:39.595 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:39.595 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:39.595 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:31:39.595 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@791 -- # iptables-save 00:31:39.595 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:39.595 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:31:39.595 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:39.595 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:39.595 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.595 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:39.595 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:41.509 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:41.509 00:31:41.509 real 0m8.172s 00:31:41.509 user 0m24.871s 00:31:41.509 sys 0m1.315s 00:31:41.509 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:41.509 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:41.509 ************************************ 00:31:41.509 END TEST nvmf_shutdown_tc2 00:31:41.509 ************************************ 00:31:41.509 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:31:41.509 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:41.509 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:31:41.509 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:41.770 ************************************ 00:31:41.770 START TEST nvmf_shutdown_tc3 00:31:41.770 ************************************ 00:31:41.770 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:31:41.770 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:31:41.770 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:31:41.770 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:41.770 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:41.770 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:41.770 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:41.770 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:41.770 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:41.770 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:41.770 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:41.770 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:41.770 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 
00:31:41.770 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:31:41.770 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:41.770 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:41.770 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:31:41.770 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:41.770 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:41.770 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:41.770 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 
00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:41.771 13:02:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:41.771 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 
0000:4b:00.1 (0x8086 - 0x159b)' 00:31:41.771 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:41.771 13:02:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:41.771 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:41.771 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:41.771 13:02:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:41.771 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:42.033 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:42.033 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:42.033 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:42.033 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:42.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:42.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:31:42.033 00:31:42.033 --- 10.0.0.2 ping statistics --- 00:31:42.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.033 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:31:42.033 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:42.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:42.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:31:42.033 00:31:42.033 --- 10.0.0.1 ping statistics --- 00:31:42.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.033 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:31:42.033 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:42.033 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:31:42.033 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:42.033 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:42.033 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:42.033 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:42.033 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:42.033 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:42.033 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:42.033 13:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:31:42.033 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:42.033 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:42.033 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:42.033 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3542304 00:31:42.033 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3542304 00:31:42.033 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:31:42.033 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3542304 ']' 00:31:42.033 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:42.033 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:42.033 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:42.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:42.033 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:42.033 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:42.033 [2024-11-28 13:02:12.083817] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:31:42.033 [2024-11-28 13:02:12.083882] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:42.294 [2024-11-28 13:02:12.228637] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:42.294 [2024-11-28 13:02:12.282932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:42.294 [2024-11-28 13:02:12.301394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:42.294 [2024-11-28 13:02:12.301426] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:42.294 [2024-11-28 13:02:12.301432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:42.294 [2024-11-28 13:02:12.301437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:42.294 [2024-11-28 13:02:12.301441] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:42.294 [2024-11-28 13:02:12.303114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:42.294 [2024-11-28 13:02:12.303273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:42.294 [2024-11-28 13:02:12.303402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:42.294 [2024-11-28 13:02:12.303404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:42.868 [2024-11-28 13:02:12.940320] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.868 13:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:42.868 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:43.128 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:43.128 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:43.128 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:43.128 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:43.128 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:31:43.128 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.128 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:43.128 Malloc1 00:31:43.128 [2024-11-28 13:02:13.053702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:43.128 Malloc2 00:31:43.129 Malloc3 00:31:43.129 Malloc4 00:31:43.129 Malloc5 00:31:43.129 Malloc6 00:31:43.390 Malloc7 00:31:43.390 Malloc8 00:31:43.390 Malloc9 
00:31:43.390 Malloc10 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3542681 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3542681 /var/tmp/bdevperf.sock 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3542681 ']' 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:43.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:43.390 { 00:31:43.390 "params": { 00:31:43.390 "name": "Nvme$subsystem", 00:31:43.390 "trtype": "$TEST_TRANSPORT", 00:31:43.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.390 "adrfam": "ipv4", 00:31:43.390 "trsvcid": "$NVMF_PORT", 00:31:43.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.390 "hdgst": ${hdgst:-false}, 00:31:43.390 "ddgst": ${ddgst:-false} 00:31:43.390 }, 00:31:43.390 "method": "bdev_nvme_attach_controller" 00:31:43.390 } 00:31:43.390 EOF 00:31:43.390 )") 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:43.390 { 00:31:43.390 "params": { 00:31:43.390 "name": "Nvme$subsystem", 00:31:43.390 "trtype": "$TEST_TRANSPORT", 00:31:43.390 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.390 "adrfam": "ipv4", 00:31:43.390 "trsvcid": "$NVMF_PORT", 00:31:43.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.390 "hdgst": ${hdgst:-false}, 00:31:43.390 "ddgst": ${ddgst:-false} 00:31:43.390 }, 00:31:43.390 "method": "bdev_nvme_attach_controller" 00:31:43.390 } 00:31:43.390 EOF 00:31:43.390 )") 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:43.390 { 00:31:43.390 "params": { 00:31:43.390 "name": "Nvme$subsystem", 00:31:43.390 "trtype": "$TEST_TRANSPORT", 00:31:43.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.390 "adrfam": "ipv4", 00:31:43.390 "trsvcid": "$NVMF_PORT", 00:31:43.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.390 "hdgst": ${hdgst:-false}, 00:31:43.390 "ddgst": ${ddgst:-false} 00:31:43.390 }, 00:31:43.390 "method": "bdev_nvme_attach_controller" 00:31:43.390 } 00:31:43.390 EOF 00:31:43.390 )") 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:43.390 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:43.390 { 00:31:43.390 "params": { 00:31:43.390 "name": "Nvme$subsystem", 00:31:43.391 "trtype": "$TEST_TRANSPORT", 00:31:43.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.391 "adrfam": "ipv4", 00:31:43.391 "trsvcid": "$NVMF_PORT", 00:31:43.391 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.391 "hdgst": ${hdgst:-false}, 00:31:43.391 "ddgst": ${ddgst:-false} 00:31:43.391 }, 00:31:43.391 "method": "bdev_nvme_attach_controller" 00:31:43.391 } 00:31:43.391 EOF 00:31:43.391 )") 00:31:43.391 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:31:43.391 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:43.391 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:43.391 { 00:31:43.391 "params": { 00:31:43.391 "name": "Nvme$subsystem", 00:31:43.391 "trtype": "$TEST_TRANSPORT", 00:31:43.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.391 "adrfam": "ipv4", 00:31:43.391 "trsvcid": "$NVMF_PORT", 00:31:43.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.391 "hdgst": ${hdgst:-false}, 00:31:43.391 "ddgst": ${ddgst:-false} 00:31:43.391 }, 00:31:43.391 "method": "bdev_nvme_attach_controller" 00:31:43.391 } 00:31:43.391 EOF 00:31:43.391 )") 00:31:43.391 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:31:43.391 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:43.391 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:43.391 { 00:31:43.391 "params": { 00:31:43.391 "name": "Nvme$subsystem", 00:31:43.391 "trtype": "$TEST_TRANSPORT", 00:31:43.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.391 "adrfam": "ipv4", 00:31:43.391 "trsvcid": "$NVMF_PORT", 00:31:43.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.391 "hdgst": 
${hdgst:-false}, 00:31:43.391 "ddgst": ${ddgst:-false} 00:31:43.391 }, 00:31:43.391 "method": "bdev_nvme_attach_controller" 00:31:43.391 } 00:31:43.391 EOF 00:31:43.391 )") 00:31:43.391 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:31:43.391 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:43.391 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:43.391 { 00:31:43.391 "params": { 00:31:43.391 "name": "Nvme$subsystem", 00:31:43.391 "trtype": "$TEST_TRANSPORT", 00:31:43.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.391 "adrfam": "ipv4", 00:31:43.391 "trsvcid": "$NVMF_PORT", 00:31:43.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.391 "hdgst": ${hdgst:-false}, 00:31:43.391 "ddgst": ${ddgst:-false} 00:31:43.391 }, 00:31:43.391 "method": "bdev_nvme_attach_controller" 00:31:43.391 } 00:31:43.391 EOF 00:31:43.391 )") 00:31:43.391 [2024-11-28 13:02:13.497947] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:31:43.391 [2024-11-28 13:02:13.498002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3542681 ] 00:31:43.391 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:31:43.391 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:43.391 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:43.391 { 00:31:43.391 "params": { 00:31:43.391 "name": "Nvme$subsystem", 00:31:43.391 "trtype": "$TEST_TRANSPORT", 00:31:43.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.391 "adrfam": "ipv4", 00:31:43.391 "trsvcid": "$NVMF_PORT", 00:31:43.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.391 "hdgst": ${hdgst:-false}, 00:31:43.391 "ddgst": ${ddgst:-false} 00:31:43.391 }, 00:31:43.391 "method": "bdev_nvme_attach_controller" 00:31:43.391 } 00:31:43.391 EOF 00:31:43.391 )") 00:31:43.391 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:31:43.391 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:43.391 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:43.391 { 00:31:43.391 "params": { 00:31:43.391 "name": "Nvme$subsystem", 00:31:43.391 "trtype": "$TEST_TRANSPORT", 00:31:43.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.391 "adrfam": "ipv4", 00:31:43.391 "trsvcid": "$NVMF_PORT", 00:31:43.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.391 "hdgst": 
${hdgst:-false}, 00:31:43.391 "ddgst": ${ddgst:-false} 00:31:43.391 }, 00:31:43.391 "method": "bdev_nvme_attach_controller" 00:31:43.391 } 00:31:43.391 EOF 00:31:43.391 )") 00:31:43.652 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:31:43.652 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:43.652 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:43.652 { 00:31:43.652 "params": { 00:31:43.652 "name": "Nvme$subsystem", 00:31:43.652 "trtype": "$TEST_TRANSPORT", 00:31:43.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.652 "adrfam": "ipv4", 00:31:43.652 "trsvcid": "$NVMF_PORT", 00:31:43.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.652 "hdgst": ${hdgst:-false}, 00:31:43.652 "ddgst": ${ddgst:-false} 00:31:43.652 }, 00:31:43.652 "method": "bdev_nvme_attach_controller" 00:31:43.652 } 00:31:43.652 EOF 00:31:43.652 )") 00:31:43.652 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:31:43.652 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:31:43.652 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:31:43.652 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:43.652 "params": { 00:31:43.652 "name": "Nvme1", 00:31:43.652 "trtype": "tcp", 00:31:43.652 "traddr": "10.0.0.2", 00:31:43.652 "adrfam": "ipv4", 00:31:43.652 "trsvcid": "4420", 00:31:43.652 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:43.652 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:43.652 "hdgst": false, 00:31:43.652 "ddgst": false 00:31:43.652 }, 00:31:43.652 "method": "bdev_nvme_attach_controller" 00:31:43.652 },{ 00:31:43.652 "params": { 00:31:43.652 "name": "Nvme2", 00:31:43.652 "trtype": "tcp", 00:31:43.652 "traddr": "10.0.0.2", 00:31:43.652 "adrfam": "ipv4", 00:31:43.652 "trsvcid": "4420", 00:31:43.652 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:43.652 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:43.652 "hdgst": false, 00:31:43.652 "ddgst": false 00:31:43.652 }, 00:31:43.652 "method": "bdev_nvme_attach_controller" 00:31:43.652 },{ 00:31:43.652 "params": { 00:31:43.652 "name": "Nvme3", 00:31:43.652 "trtype": "tcp", 00:31:43.652 "traddr": "10.0.0.2", 00:31:43.652 "adrfam": "ipv4", 00:31:43.652 "trsvcid": "4420", 00:31:43.652 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:43.652 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:43.652 "hdgst": false, 00:31:43.652 "ddgst": false 00:31:43.652 }, 00:31:43.652 "method": "bdev_nvme_attach_controller" 00:31:43.652 },{ 00:31:43.652 "params": { 00:31:43.652 "name": "Nvme4", 00:31:43.652 "trtype": "tcp", 00:31:43.652 "traddr": "10.0.0.2", 00:31:43.652 "adrfam": "ipv4", 00:31:43.652 "trsvcid": "4420", 00:31:43.652 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:43.652 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:43.652 "hdgst": false, 00:31:43.652 "ddgst": false 00:31:43.652 }, 00:31:43.652 "method": "bdev_nvme_attach_controller" 00:31:43.652 },{ 00:31:43.652 "params": { 
00:31:43.652 "name": "Nvme5", 00:31:43.652 "trtype": "tcp", 00:31:43.652 "traddr": "10.0.0.2", 00:31:43.652 "adrfam": "ipv4", 00:31:43.652 "trsvcid": "4420", 00:31:43.652 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:43.652 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:43.652 "hdgst": false, 00:31:43.652 "ddgst": false 00:31:43.652 }, 00:31:43.652 "method": "bdev_nvme_attach_controller" 00:31:43.652 },{ 00:31:43.652 "params": { 00:31:43.652 "name": "Nvme6", 00:31:43.652 "trtype": "tcp", 00:31:43.652 "traddr": "10.0.0.2", 00:31:43.652 "adrfam": "ipv4", 00:31:43.652 "trsvcid": "4420", 00:31:43.652 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:43.652 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:43.652 "hdgst": false, 00:31:43.652 "ddgst": false 00:31:43.652 }, 00:31:43.652 "method": "bdev_nvme_attach_controller" 00:31:43.652 },{ 00:31:43.652 "params": { 00:31:43.652 "name": "Nvme7", 00:31:43.652 "trtype": "tcp", 00:31:43.652 "traddr": "10.0.0.2", 00:31:43.652 "adrfam": "ipv4", 00:31:43.652 "trsvcid": "4420", 00:31:43.652 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:43.652 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:43.652 "hdgst": false, 00:31:43.652 "ddgst": false 00:31:43.652 }, 00:31:43.652 "method": "bdev_nvme_attach_controller" 00:31:43.652 },{ 00:31:43.652 "params": { 00:31:43.652 "name": "Nvme8", 00:31:43.652 "trtype": "tcp", 00:31:43.652 "traddr": "10.0.0.2", 00:31:43.652 "adrfam": "ipv4", 00:31:43.652 "trsvcid": "4420", 00:31:43.652 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:43.652 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:43.652 "hdgst": false, 00:31:43.652 "ddgst": false 00:31:43.652 }, 00:31:43.652 "method": "bdev_nvme_attach_controller" 00:31:43.652 },{ 00:31:43.652 "params": { 00:31:43.652 "name": "Nvme9", 00:31:43.652 "trtype": "tcp", 00:31:43.652 "traddr": "10.0.0.2", 00:31:43.652 "adrfam": "ipv4", 00:31:43.652 "trsvcid": "4420", 00:31:43.652 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:43.652 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:31:43.652 "hdgst": false, 00:31:43.652 "ddgst": false 00:31:43.652 }, 00:31:43.652 "method": "bdev_nvme_attach_controller" 00:31:43.652 },{ 00:31:43.652 "params": { 00:31:43.652 "name": "Nvme10", 00:31:43.652 "trtype": "tcp", 00:31:43.652 "traddr": "10.0.0.2", 00:31:43.652 "adrfam": "ipv4", 00:31:43.652 "trsvcid": "4420", 00:31:43.652 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:43.653 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:43.653 "hdgst": false, 00:31:43.653 "ddgst": false 00:31:43.653 }, 00:31:43.653 "method": "bdev_nvme_attach_controller" 00:31:43.653 }' 00:31:43.653 [2024-11-28 13:02:13.631562] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:43.653 [2024-11-28 13:02:13.691366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:43.653 [2024-11-28 13:02:13.709362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.054 Running I/O for 10 seconds... 
00:31:45.054 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:45.054 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:31:45.054 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:45.054 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.054 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:45.314 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.314 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:45.314 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:31:45.314 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:45.314 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:31:45.314 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:31:45.314 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:31:45.314 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:31:45.314 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:31:45.314 13:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:45.314 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:31:45.314 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.314 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:45.314 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.314 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:31:45.314 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:31:45.314 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:31:45.573 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:31:45.573 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:31:45.573 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:45.573 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:31:45.573 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.573 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:45.573 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:31:45.573 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:31:45.573 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:31:45.573 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:31:45.833 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:31:45.833 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:31:45.833 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:45.834 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:31:45.834 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.834 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:45.834 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.834 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:31:45.834 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:31:45.834 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:31:45.834 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:31:45.834 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:31:45.834 13:02:15 
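The trace above is target/shutdown.sh's waitforio loop: up to 10 polls of `bdev_get_iostat -b Nvme1n1`, extracting `.bdevs[0].num_read_ops` with jq, and breaking once at least 100 reads have completed (the counts climb 3, 67, 195 in this run). A minimal self-contained sketch of that polling pattern, with the RPC stubbed out by a hypothetical `next_read_ops` helper:

```shell
# next_read_ops is a hypothetical stub standing in for:
#   rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
#     | jq -r '.bdevs[0].num_read_ops'
# It replays the read counts observed in this run: 3, 67, 195.
samples=(3 67 195)
attempt=0
next_read_ops() {
    READ_OPS=${samples[$attempt]}
    attempt=$((attempt + 1))
}

# Poll up to 10 times; succeed once at least 100 reads are seen,
# mirroring the i/read_io_count loop in target/shutdown.sh.
waitforio() {
    local i
    for (( i = 10; i != 0; i-- )); do
        next_read_ops
        if [ "$READ_OPS" -ge 100 ]; then
            return 0          # enough I/O observed; safe to proceed with shutdown
        fi
        sleep 0.25            # same back-off as shutdown.sh
    done
    return 1
}

waitforio && echo "I/O confirmed after $attempt polls"
# → I/O confirmed after 3 polls
```

The third sample (195) crosses the 100-read threshold, exactly as read_io_count=195 triggers `break` and `return 0` in the trace.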
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3542304 00:31:45.834 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3542304 ']' 00:31:45.834 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3542304 00:31:45.834 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:31:45.834 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:45.834 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3542304 00:31:46.110 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:46.110 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:46.110 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3542304' 00:31:46.110 killing process with pid 3542304 00:31:46.110 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3542304 00:31:46.110 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3542304 00:31:46.110 [2024-11-28 13:02:16.013208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d00710 is same with the state(6) to be set 00:31:46.110 [2024-11-28 13:02:16.013250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d00710 is same with the state(6) to be set 00:31:46.110 [2024-11-28 13:02:16.013256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1d00710 is same with the state(6) to be set [... same tcp.c:1773 message for tqpair=0x1d00710 repeated ~60 more times, 13:02:16.013262 through 13:02:16.013549 ...] 00:31:46.111 [2024-11-28 13:02:16.015253]
nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:46.111 [2024-11-28 13:02:16.015612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89a80 is same with the state(6) to be set [... same message for tqpair=0x1a89a80 repeated ~60 more times, 13:02:16.015637 through 13:02:16.015938 ...] 00:31:46.112 [2024-11-28 13:02:16.020116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d010d0 is same with the state(6) to be set [... repeated 2 more times through 13:02:16.020145 ...] 00:31:46.112 [2024-11-28 13:02:16.021053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d015c0 is same with the state(6) to be set [... same message for tqpair=0x1d015c0 repeated ~60 more times, 13:02:16.021075 through 13:02:16.021394 ...] 00:31:46.113 [2024-11-28 13:02:16.022497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set [... repeated 2 more times ...] 00:31:46.113 [2024-11-28 13:02:16.022522]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022581] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022638] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022695] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022750] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.022802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01f60 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.023742] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.023756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.023761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.023766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.023771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.023776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.023781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.023786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.023791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.023795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.023800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.023805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.023809] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.023814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.023818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.023823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.113 [2024-11-28 13:02:16.023828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023875] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023931] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023987] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.023992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024048] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02430 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024725] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024784] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.114 [2024-11-28 13:02:16.024829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.115 [2024-11-28 13:02:16.024834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.115 [2024-11-28 13:02:16.024839] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02900 is same with the state(6) to be set 00:31:46.115 (identical *ERROR* message for tqpair=0x1d02900 repeated, 13:02:16.024843 through 13:02:16.024970; repeats elided) [2024-11-28 13:02:16.025421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89590 is same with the state(6) to be set 00:31:46.115 (identical *ERROR* message for tqpair=0x1a89590 repeated, 13:02:16.025437 through 13:02:16.025723; repeats elided) 00:31:46.116 [2024-11-28 13:02:16.035358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.116 [2024-11-28 13:02:16.035388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.116 [2024-11-28 13:02:16.035398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.116 [2024-11-28 13:02:16.035406]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.116 [2024-11-28 13:02:16.035415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.116 [2024-11-28 13:02:16.035423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.116 [2024-11-28 13:02:16.035431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.116 [2024-11-28 13:02:16.035438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.116 [2024-11-28 13:02:16.035446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877610 is same with the state(6) to be set 00:31:46.116 (same ASYNC EVENT REQUEST cid:0-3 / ABORTED - SQ DELETION sequence and recv-state *ERROR* repeated, 13:02:16.035481 through 13:02:16.036266, for tqpair=0xde5a10, 0x977180, 0x978980, 0xda1850, 0xdd9d80, 0xdd8500, 0xda4a40, 0x979880 and 0x9793f0; repeats elided) 00:31:46.117 [2024-11-28 13:02:16.036684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.117 [2024-11-28 13:02:16.036706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.117 [2024-11-28 13:02:16.036721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.117 [2024-11-28 13:02:16.036730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.117 [2024-11-28 13:02:16.036740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.117 [2024-11-28 13:02:16.036749]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.117 (same command/ABORTED - SQ DELETION pattern repeated for WRITE sqid:1 cid:51 through cid:63, lba:31104 through lba:32640 len:128, and READ sqid:1 cid:0 through cid:13, lba:24576 through lba:26240 len:128, 13:02:16.036761 through 13:02:16.037231; entries elided) [2024-11-28 13:02:16.037241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.118 [2024-11-28 13:02:16.037248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.118 [2024-11-28 13:02:16.037257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.118 [2024-11-28 13:02:16.037265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.118 [2024-11-28 13:02:16.037277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.118 [2024-11-28 13:02:16.037285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.118 [2024-11-28 13:02:16.037295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.118 [2024-11-28 13:02:16.037302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.118 [2024-11-28 13:02:16.037313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.118 [2024-11-28 13:02:16.037320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.118 [2024-11-28 13:02:16.037331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.118 [2024-11-28 13:02:16.037338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.118 [2024-11-28 13:02:16.037348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:46.118 [2024-11-28 13:02:16.037357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.118 [2024-11-28 13:02:16.037367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.118 [2024-11-28 13:02:16.037375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.118 [2024-11-28 13:02:16.037384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.118 [2024-11-28 13:02:16.037392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.118 [2024-11-28 13:02:16.037402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.118 [2024-11-28 13:02:16.037410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.118 [2024-11-28 13:02:16.037419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.118 [2024-11-28 13:02:16.037427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.118 [2024-11-28 13:02:16.037436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.118 [2024-11-28 13:02:16.037444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.118 [2024-11-28 13:02:16.037455] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.118 [2024-11-28 13:02:16.037463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.118 [2024-11-28 13:02:16.037472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.118 [2024-11-28 13:02:16.037480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.118 [2024-11-28 13:02:16.037489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.118 [2024-11-28 13:02:16.037497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.118 [2024-11-28 13:02:16.037506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.118 [2024-11-28 13:02:16.037514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.118 [2024-11-28 13:02:16.037523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.118 [2024-11-28 13:02:16.037531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.118 [2024-11-28 13:02:16.037541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.118 [2024-11-28 13:02:16.037549] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.118 [2024-11-28 13:02:16.037560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.118 [2024-11-28 13:02:16.037568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.118 [2024-11-28 13:02:16.037578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.118 [2024-11-28 13:02:16.037585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.118 [2024-11-28 13:02:16.037595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.118 [2024-11-28 13:02:16.037603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.118 [2024-11-28 13:02:16.037612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.118 [2024-11-28 13:02:16.037619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.118 [2024-11-28 13:02:16.037629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.037637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.037646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.037653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.037663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.037672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.037681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.037689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.037698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.037706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.037715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.037723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.037732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.037740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 
13:02:16.037750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.037757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.037767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.037774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.037784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.037791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.037801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.037809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.037819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.037827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.037855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:46.119 [2024-11-28 13:02:16.060285] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060684] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060796] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.119 [2024-11-28 13:02:16.060825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.119 [2024-11-28 13:02:16.060833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.060843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.060850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.060860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.060868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.060877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.060887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.060898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.060908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.060919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.060926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.060936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.060944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.060954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.060962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.060971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.060979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.060988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.060996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 
[2024-11-28 13:02:16.061005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.061016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.061026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.061036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.061046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.061054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.061065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.061072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.061082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.061090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.061099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.061106] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.061118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.061126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.061137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.061146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.061156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.061169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.061178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.061186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.061196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.061203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.061213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.061221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.061231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.061241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.061254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.061262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.061272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.061280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.061290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.061298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.061308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.061316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.061325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.061332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.061342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.061355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.061365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.061373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.061383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.120 [2024-11-28 13:02:16.061391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.120 [2024-11-28 13:02:16.061401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.061410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.061420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 
13:02:16.061428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.061437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.061445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.061456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.061463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.061473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.061481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.061493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.061501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.061511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.061520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.061530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.061539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.061548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.061556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.061846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:31:46.121 [2024-11-28 13:02:16.061887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x977180 (9): Bad file descriptor 00:31:46.121 [2024-11-28 13:02:16.061923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x877610 (9): Bad file descriptor 00:31:46.121 [2024-11-28 13:02:16.061944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde5a10 (9): Bad file descriptor 00:31:46.121 [2024-11-28 13:02:16.061965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x978980 (9): Bad file descriptor 00:31:46.121 [2024-11-28 13:02:16.061981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda1850 (9): Bad file descriptor 00:31:46.121 [2024-11-28 13:02:16.061994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd9d80 (9): Bad file descriptor 00:31:46.121 [2024-11-28 13:02:16.062012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd8500 (9): Bad file descriptor 00:31:46.121 [2024-11-28 13:02:16.062032] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda4a40 (9): Bad file descriptor 00:31:46.121 [2024-11-28 13:02:16.062049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x979880 (9): Bad file descriptor 00:31:46.121 [2024-11-28 13:02:16.062066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9793f0 (9): Bad file descriptor 00:31:46.121 [2024-11-28 13:02:16.062117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.062140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.062166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.062184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.062202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.062219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.062237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.062254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.062272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.062292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 
13:02:16.062309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.062326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.062344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.062361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.062377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.062394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.062411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.062433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.062450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.062467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.062484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.062504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.062521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.121 [2024-11-28 13:02:16.062538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.121 [2024-11-28 13:02:16.062545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 
13:02:16.062696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.062983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.062990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.063000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.063007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.063016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.063023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.063033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.063040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.063050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.063057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.063066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.063073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 
[2024-11-28 13:02:16.063083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.063091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.063100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.063107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.063117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.063124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.063134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.063142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.122 [2024-11-28 13:02:16.063152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.122 [2024-11-28 13:02:16.063163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.123 [2024-11-28 13:02:16.063173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.123 [2024-11-28 13:02:16.063180] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.123 [2024-11-28 13:02:16.063190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.123 [2024-11-28 13:02:16.063197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.123 [2024-11-28 13:02:16.063207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.123 [2024-11-28 13:02:16.063214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.123 [2024-11-28 13:02:16.063225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.123 [2024-11-28 13:02:16.063232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.123 [2024-11-28 13:02:16.063241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea2dc0 is same with the state(6) to be set 00:31:46.123 [2024-11-28 13:02:16.064574] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:46.123 [2024-11-28 13:02:16.064731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:31:46.123 [2024-11-28 13:02:16.066274] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:46.123 [2024-11-28 13:02:16.066333] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:46.123 [2024-11-28 13:02:16.066373] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:46.123 [2024-11-28 13:02:16.066413] 
nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:31:46.123 [2024-11-28 13:02:16.066427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:31:46.123 [2024-11-28 13:02:16.066793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.123 [2024-11-28 13:02:16.066810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x977180 with addr=10.0.0.2, port=4420
00:31:46.123 [2024-11-28 13:02:16.066818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x977180 is same with the state(6) to be set
00:31:46.123 [2024-11-28 13:02:16.067142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.123 [2024-11-28 13:02:16.067152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda1850 with addr=10.0.0.2, port=4420
00:31:46.123 [2024-11-28 13:02:16.067166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda1850 is same with the state(6) to be set
00:31:46.123 [2024-11-28 13:02:16.067244] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:31:46.123 [2024-11-28 13:02:16.067910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.123 [2024-11-28 13:02:16.067924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9793f0 with addr=10.0.0.2, port=4420
00:31:46.123 [2024-11-28 13:02:16.067932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9793f0 is same with the state(6) to be set
00:31:46.123 [2024-11-28 13:02:16.067947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x977180 (9): Bad file descriptor
00:31:46.123 [2024-11-28 13:02:16.067958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda1850 (9): Bad file descriptor
00:31:46.123 [2024-11-28 13:02:16.068280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9793f0 (9): Bad file descriptor
00:31:46.123 [2024-11-28 13:02:16.068293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:31:46.123 [2024-11-28 13:02:16.068309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:31:46.123 [2024-11-28 13:02:16.068318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:31:46.123 [2024-11-28 13:02:16.068327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:31:46.123 [2024-11-28 13:02:16.068336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:31:46.123 [2024-11-28 13:02:16.068343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:31:46.123 [2024-11-28 13:02:16.068350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:31:46.123 [2024-11-28 13:02:16.068357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:31:46.123 [2024-11-28 13:02:16.068406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:31:46.123 [2024-11-28 13:02:16.068414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:31:46.123 [2024-11-28 13:02:16.068421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:31:46.123 [2024-11-28 13:02:16.068428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:31:46.123 [2024-11-28 13:02:16.071984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.123 [2024-11-28 13:02:16.071998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.123 [2024-11-28 13:02:16.072011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.123 [2024-11-28 13:02:16.072019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.123 [2024-11-28 13:02:16.072030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.123 [2024-11-28 13:02:16.072038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.123 [2024-11-28 13:02:16.072047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.123 [2024-11-28 13:02:16.072055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.123 [2024-11-28 13:02:16.072064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.123 [2024-11-28 13:02:16.072072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.123 [2024-11-28 13:02:16.072081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.123 [2024-11-28 13:02:16.072089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.123 [2024-11-28 13:02:16.072102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.123 [2024-11-28 13:02:16.072110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.123 [2024-11-28 13:02:16.072120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.123 [2024-11-28 13:02:16.072128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.123 [2024-11-28 13:02:16.072137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.123 [2024-11-28 13:02:16.072145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.123 [2024-11-28 13:02:16.072155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.123 [2024-11-28 13:02:16.072166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.123 [2024-11-28 13:02:16.072176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.123 [2024-11-28 13:02:16.072183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.123 [2024-11-28 13:02:16.072193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.123 [2024-11-28 13:02:16.072200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.123 [2024-11-28 13:02:16.072210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.123 [2024-11-28 13:02:16.072217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.123 [2024-11-28 13:02:16.072227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.123 [2024-11-28 13:02:16.072234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.123 [2024-11-28 13:02:16.072244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.123 [2024-11-28 13:02:16.072251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.123 [2024-11-28 13:02:16.072262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.123 [2024-11-28 13:02:16.072269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.123 [2024-11-28 13:02:16.072279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.123 [2024-11-28 13:02:16.072286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.123 [2024-11-28 13:02:16.072296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.123 [2024-11-28 13:02:16.072303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.124 [2024-11-28 13:02:16.072836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.124 [2024-11-28 13:02:16.072846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.072853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.072863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.072870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.072880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.072887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.072897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.072905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.072915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.072923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.072932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.072940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.072949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.072957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.072966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.072978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.072988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.072996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.073005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.073012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.073022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.073030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.073039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.073046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.073056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.073063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.073073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.073080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.073089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.073097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.073105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1df0 is same with the state(6) to be set
00:31:46.125 [2024-11-28 13:02:16.074390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.074402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.074413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.074421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.074431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.074439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.074448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.074456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.074466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.074476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.074485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.074493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.074502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.074510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.074519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.074527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.074536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.074543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.074553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.074560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.074570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.074577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.074587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.074594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.074604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.074611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.074621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.074628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.074638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.074646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.074655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.074663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.074672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.074680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.074691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.074699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.074708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.074716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.074725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.074733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.125 [2024-11-28 13:02:16.074743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.125 [2024-11-28 13:02:16.074751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.126 [2024-11-28 13:02:16.074761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.126 [2024-11-28 13:02:16.074768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.126 [2024-11-28 13:02:16.074778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.126 [2024-11-28 13:02:16.074785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.126 [2024-11-28 13:02:16.074794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.126 [2024-11-28 13:02:16.074802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.126 [2024-11-28 13:02:16.074811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.126 [2024-11-28 13:02:16.074819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.126 [2024-11-28 13:02:16.074828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.126 [2024-11-28 13:02:16.074836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.126 [2024-11-28 13:02:16.074845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.126 [2024-11-28 13:02:16.074852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.126 [2024-11-28 13:02:16.074862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.126 [2024-11-28 13:02:16.074869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.126 [2024-11-28 13:02:16.074878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.126 [2024-11-28 13:02:16.074886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.126 [2024-11-28 13:02:16.074895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.126 [2024-11-28 13:02:16.074904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.126 [2024-11-28 13:02:16.074914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.126 [2024-11-28 13:02:16.074921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.126 [2024-11-28 13:02:16.074931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.126 [2024-11-28 13:02:16.074938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.126 [2024-11-28 13:02:16.074947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.126 [2024-11-28 13:02:16.074954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.126 [2024-11-28 13:02:16.074964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.126 [2024-11-28 13:02:16.074971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.126 [2024-11-28 13:02:16.074980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.126 [2024-11-28 13:02:16.074988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.126 [2024-11-28 13:02:16.074997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.126 [2024-11-28 13:02:16.075004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.126 [2024-11-28 13:02:16.075014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.126 [2024-11-28 13:02:16.075021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.126 [2024-11-28 13:02:16.075031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.126 [2024-11-28 13:02:16.075038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.126 [2024-11-28 13:02:16.075047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.126 [2024-11-28 13:02:16.075055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.126 [2024-11-28 13:02:16.075064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.126 [2024-11-28 13:02:16.075071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.126 [2024-11-28 13:02:16.075080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.126 [2024-11-28 13:02:16.075087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.126 [2024-11-28 13:02:16.075097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.126 [2024-11-28 13:02:16.075104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.126 [2024-11-28 13:02:16.075115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.126 [2024-11-28 13:02:16.075122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.126 [2024-11-28 13:02:16.075132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.126 [2024-11-28 13:02:16.075139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.126 [2024-11-28 13:02:16.075149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.126 [2024-11-28 13:02:16.075156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.126 [2024-11-28 13:02:16.075170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.126 [2024-11-28 13:02:16.075177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.126 [2024-11-28 13:02:16.075187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.126 [2024-11-28 13:02:16.075195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.126 [2024-11-28 13:02:16.075204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.126 [2024-11-28 13:02:16.075211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.126 [2024-11-28 
13:02:16.075220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.126 [2024-11-28 13:02:16.075228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.126 [2024-11-28 13:02:16.075237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.126 [2024-11-28 13:02:16.075244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.126 [2024-11-28 13:02:16.075254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.126 [2024-11-28 13:02:16.075261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.126 [2024-11-28 13:02:16.075271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.126 [2024-11-28 13:02:16.075278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.126 [2024-11-28 13:02:16.075288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.126 [2024-11-28 13:02:16.075295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.075304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.075312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.075321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.075330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.075339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.075347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.075357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.075364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.075373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.075380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.075390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.075397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.075407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.075414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.075423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.075430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.075440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.075447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.075457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.075464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.075473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.075481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.075489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea5020 is same with the state(6) to be set 00:31:46.127 [2024-11-28 13:02:16.076760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:46.127 [2024-11-28 13:02:16.076771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.076783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.076791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.076801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.076811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.076820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.076828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.076838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.076845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.076854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.076862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.076873] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.076880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.076890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.076897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.076906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.076914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.076923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.076930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.076940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.076947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.076956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.076964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.076973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.076980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.076989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.076997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.077007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.077014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.077025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.077033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.077042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.077050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.077059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.077066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.077076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.077083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.077092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.077100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.077109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.077116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.077126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.077133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.077142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.077149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.127 [2024-11-28 13:02:16.077163] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.127 [2024-11-28 13:02:16.077171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077256] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 
13:02:16.077457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.077665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.077675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.084152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.084213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.084223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.084234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.084242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.084252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.084260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:46.128 [2024-11-28 13:02:16.084269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.084277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.084286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.084294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.128 [2024-11-28 13:02:16.084304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.128 [2024-11-28 13:02:16.084311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.084321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.084328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.084338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.084345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.084354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.084362] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.084371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.084379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.084389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea70e0 is same with the state(6) to be set 00:31:46.129 [2024-11-28 13:02:16.085731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.085753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.085770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.085780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.085791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.085799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.085809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.085816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.085826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.085833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.085843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.085850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.085860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.085867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.085877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.085885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.085894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.085901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.085911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:46.129 [2024-11-28 13:02:16.085918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.085928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.085935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.085945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.085952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.085962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.085969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.085979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.085988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.085997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.086005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.086014] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.086021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.086031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.086038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.086048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.086055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.086065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.086072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.086082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.086089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.086099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.086106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.086115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.086123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.086132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.086140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.086149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.086157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.086175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.086182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.086192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.086200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.086212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.086220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.086230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.086237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.086247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.129 [2024-11-28 13:02:16.086254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.129 [2024-11-28 13:02:16.086264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086315] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086407] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 
13:02:16.086604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 
nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.086855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.086863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a77270 is same with the state(6) to be set 00:31:46.130 [2024-11-28 13:02:16.088140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.088153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.088172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:46.130 [2024-11-28 13:02:16.088181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.088193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.088202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.088214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.088223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.088234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.130 [2024-11-28 13:02:16.088243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.130 [2024-11-28 13:02:16.088254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088577] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088668] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 
13:02:16.088864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.131 [2024-11-28 13:02:16.088871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.131 [2024-11-28 13:02:16.088881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.088888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.088897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.088904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.088914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.088923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.088934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.088941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.088951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.088959] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.088968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.088975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.088985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.088993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.089003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.089011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.089021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.089028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.089038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.089046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.089055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.089062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.089072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.089079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.089089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.089096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.089106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.089113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.089123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.089130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.089141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.089148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:46.132 [2024-11-28 13:02:16.089215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.089225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.089234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.089242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.089251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.089259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.089268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.089278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.089287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.089295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.089304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.089312] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.089321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4d20 is same with the state(6) to be set 00:31:46.132 [2024-11-28 13:02:16.090600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.090613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.090627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.090636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.090648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.090657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.090669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.090678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.090689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.090698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.090710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.090719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.090731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.090740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.090752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.090760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.090770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.090777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.090787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.090796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.090806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:46.132 [2024-11-28 13:02:16.090814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.090824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.090832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.090841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.090849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.090859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.090866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.090876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.132 [2024-11-28 13:02:16.090883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.132 [2024-11-28 13:02:16.090893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.090900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.090910] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.090917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.090927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.090934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.090944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.090951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.090961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.090969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.090978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.090986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.090995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091206] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091299] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 
13:02:16.091495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.133 [2024-11-28 13:02:16.091537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.133 [2024-11-28 13:02:16.091546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.091554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.091563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.091570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.091580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.091587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.091597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.091604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.091614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.091621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.091630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.091638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.091647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.091655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.091666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.091673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.091683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.091691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.091700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.091707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.091717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.091724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.091733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7da30 is same with the state(6) to be set 00:31:46.134 [2024-11-28 13:02:16.093000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.093014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.093028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.093037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.093048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:46.134 [2024-11-28 13:02:16.093057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.093069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.093077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.093087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.093094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.093104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.093111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.093121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.093129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.093139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.093146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.093163] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.093170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.093180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.093188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.093197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.093204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.093214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.093221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.093231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.093238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.093248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.093255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.093265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.093272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.093281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.093289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.093298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.093306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.093315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.093322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.093332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.093339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.093349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.093356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.093366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.134 [2024-11-28 13:02:16.093375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.134 [2024-11-28 13:02:16.093384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093452] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093544] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 
13:02:16.093741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 
nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.093989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.093999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.094006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.094017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.094025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:46.135 [2024-11-28 13:02:16.094034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.094041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.094051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.094058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.135 [2024-11-28 13:02:16.094067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.135 [2024-11-28 13:02:16.094075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.136 [2024-11-28 13:02:16.094084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.136 [2024-11-28 13:02:16.094091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.136 [2024-11-28 13:02:16.094101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.136 [2024-11-28 13:02:16.094108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.136 [2024-11-28 13:02:16.094117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7ec90 is same with the state(6) to be set 00:31:46.136 [2024-11-28 13:02:16.095642] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:46.136 [2024-11-28 13:02:16.095670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:31:46.136 [2024-11-28 13:02:16.095682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:31:46.136 [2024-11-28 13:02:16.095695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:31:46.136 [2024-11-28 13:02:16.095786] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:31:46.136 [2024-11-28 13:02:16.095802] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:31:46.136 [2024-11-28 13:02:16.095814] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
00:31:46.136 [2024-11-28 13:02:16.114879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:31:46.136 [2024-11-28 13:02:16.114905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:31:46.136 task offset: 30720 on job bdev=Nvme3n1 fails
00:31:46.136
00:31:46.136 Latency(us)
00:31:46.136 [2024-11-28T12:02:16.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:46.136 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:46.136 Job: Nvme1n1 ended in about 0.99 seconds with error
00:31:46.136 Verification LBA range: start 0x0 length 0x400
00:31:46.136 Nvme1n1 : 0.99 199.36 12.46 64.44 0.00 240005.50 19268.85 241736.53
00:31:46.136 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:46.136 Job: Nvme2n1 ended in about 0.98 seconds with error
00:31:46.136 Verification LBA range: start 0x0 length 0x400
00:31:46.136 Nvme2n1 : 0.98 194.97 12.19 64.99 0.00 238823.33 6158.37 255750.24
00:31:46.136 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:46.136 Job: Nvme3n1 ended in about 0.98 seconds with error
00:31:46.136 Verification LBA range: start 0x0 length 0x400
00:31:46.136 Nvme3n1 : 0.98 196.11 12.26 65.37 0.00 232583.66 15108.53 224219.39
00:31:46.136 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:46.136 Job: Nvme4n1 ended in about 1.00 seconds with error
00:31:46.136 Verification LBA range: start 0x0 length 0x400
00:31:46.136 Nvme4n1 : 1.00 192.86 12.05 64.29 0.00 231974.55 13740.01 252246.82
00:31:46.136 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:46.136 Job: Nvme5n1 ended in about 0.98 seconds with error
00:31:46.136 Verification LBA range: start 0x0 length 0x400
00:31:46.136 Nvme5n1 : 0.98 195.25 12.20 65.08 0.00 224161.02 25180.89 243488.25
00:31:46.136 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:46.136 Job: Nvme6n1 ended in about 1.00 seconds with error
00:31:46.136 Verification LBA range: start 0x0 length 0x400
00:31:46.136 Nvme6n1 : 1.00 195.13 12.20 63.72 0.00 221250.32 32625.67 221591.82
00:31:46.136 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:46.136 Job: Nvme7n1 ended in about 1.01 seconds with error
00:31:46.136 Verification LBA range: start 0x0 length 0x400
00:31:46.136 Nvme7n1 : 1.01 190.68 11.92 63.56 0.00 220600.49 19487.82 253998.53
00:31:46.136 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:46.136 Job: Nvme8n1 ended in about 1.01 seconds with error
00:31:46.136 Verification LBA range: start 0x0 length 0x400
00:31:46.136 Nvme8n1 : 1.01 190.22 11.89 63.41 0.00 216421.36 16750.77 248743.39
00:31:46.136 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:46.136 Job: Nvme9n1 ended in about 1.01 seconds with error
00:31:46.136 Verification LBA range: start 0x0 length 0x400
00:31:46.136 Nvme9n1 : 1.01 126.51 7.91 63.26 0.00 283216.57 19706.78 253998.53
00:31:46.136 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:46.136 Job: Nvme10n1 ended in about 1.01 seconds with error
00:31:46.136 Verification LBA range: start 0x0 length 0x400
00:31:46.136 Nvme10n1 : 1.01 126.21 7.89 63.11 0.00 277615.02 16860.25 275019.10
00:31:46.136 [2024-11-28T12:02:16.263Z] ===================================================================================================================
00:31:46.136 [2024-11-28T12:02:16.263Z] Total : 1807.32 112.96 641.22 0.00 236451.55 6158.37 275019.10
00:31:46.136 [2024-11-28 13:02:16.142497] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:31:46.136 [2024-11-28 13:02:16.142543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:31:46.136 [2024-11-28 13:02:16.142992]
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.136 [2024-11-28 13:02:16.143012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x979880 with addr=10.0.0.2, port=4420 00:31:46.136 [2024-11-28 13:02:16.143023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x979880 is same with the state(6) to be set 00:31:46.136 [2024-11-28 13:02:16.143330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.136 [2024-11-28 13:02:16.143341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x978980 with addr=10.0.0.2, port=4420 00:31:46.136 [2024-11-28 13:02:16.143349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x978980 is same with the state(6) to be set 00:31:46.136 [2024-11-28 13:02:16.143680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.136 [2024-11-28 13:02:16.143691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda4a40 with addr=10.0.0.2, port=4420 00:31:46.136 [2024-11-28 13:02:16.143698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4a40 is same with the state(6) to be set 00:31:46.136 [2024-11-28 13:02:16.144005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.136 [2024-11-28 13:02:16.144015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x877610 with addr=10.0.0.2, port=4420 00:31:46.136 [2024-11-28 13:02:16.144022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877610 is same with the state(6) to be set 00:31:46.136 [2024-11-28 13:02:16.144050] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 
00:31:46.136 [2024-11-28 13:02:16.144064] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:31:46.136 [2024-11-28 13:02:16.144077] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:31:46.136 [2024-11-28 13:02:16.144097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x877610 (9): Bad file descriptor 00:31:46.136 [2024-11-28 13:02:16.144113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda4a40 (9): Bad file descriptor 00:31:46.136 [2024-11-28 13:02:16.144126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x978980 (9): Bad file descriptor 00:31:46.136 [2024-11-28 13:02:16.144140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x979880 (9): Bad file descriptor 00:31:46.136 1807.32 IOPS, 112.96 MiB/s [2024-11-28T12:02:16.263Z] [2024-11-28 13:02:16.146000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:31:46.136 [2024-11-28 13:02:16.146015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:31:46.136 [2024-11-28 13:02:16.146376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.136 [2024-11-28 13:02:16.146391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd9d80 with addr=10.0.0.2, port=4420 00:31:46.136 [2024-11-28 13:02:16.146399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd9d80 is same with the state(6) to be set 00:31:46.136 [2024-11-28 13:02:16.146750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.136 [2024-11-28 13:02:16.146760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0xdd8500 with addr=10.0.0.2, port=4420 00:31:46.136 [2024-11-28 13:02:16.146767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd8500 is same with the state(6) to be set 00:31:46.136 [2024-11-28 13:02:16.146955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.136 [2024-11-28 13:02:16.146965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde5a10 with addr=10.0.0.2, port=4420 00:31:46.136 [2024-11-28 13:02:16.146972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde5a10 is same with the state(6) to be set 00:31:46.136 [2024-11-28 13:02:16.146999] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:31:46.136 [2024-11-28 13:02:16.147011] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:31:46.136 [2024-11-28 13:02:16.147022] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:31:46.136 [2024-11-28 13:02:16.147033] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:31:46.136 [2024-11-28 13:02:16.147045] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:31:46.136 [2024-11-28 13:02:16.147315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:31:46.136 [2024-11-28 13:02:16.147629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.136 [2024-11-28 13:02:16.147644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda1850 with addr=10.0.0.2, port=4420 00:31:46.136 [2024-11-28 13:02:16.147652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda1850 is same with the state(6) to be set 00:31:46.136 [2024-11-28 13:02:16.147994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.136 [2024-11-28 13:02:16.148004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x977180 with addr=10.0.0.2, port=4420 00:31:46.136 [2024-11-28 13:02:16.148011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x977180 is same with the state(6) to be set 00:31:46.137 [2024-11-28 13:02:16.148021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd9d80 (9): Bad file descriptor 00:31:46.137 [2024-11-28 13:02:16.148032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd8500 (9): Bad file descriptor 00:31:46.137 [2024-11-28 13:02:16.148041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde5a10 (9): Bad file descriptor 00:31:46.137 [2024-11-28 13:02:16.148050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:31:46.137 [2024-11-28 13:02:16.148056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:31:46.137 [2024-11-28 13:02:16.148065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:31:46.137 [2024-11-28 13:02:16.148074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:31:46.137 [2024-11-28 13:02:16.148083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:31:46.137 [2024-11-28 13:02:16.148089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:31:46.137 [2024-11-28 13:02:16.148096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:31:46.137 [2024-11-28 13:02:16.148103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:31:46.137 [2024-11-28 13:02:16.148110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:31:46.137 [2024-11-28 13:02:16.148116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:31:46.137 [2024-11-28 13:02:16.148123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:31:46.137 [2024-11-28 13:02:16.148129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:31:46.137 [2024-11-28 13:02:16.148137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:31:46.137 [2024-11-28 13:02:16.148143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:31:46.137 [2024-11-28 13:02:16.148150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:31:46.137 [2024-11-28 13:02:16.148156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:31:46.137 [2024-11-28 13:02:16.148434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.137 [2024-11-28 13:02:16.148447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9793f0 with addr=10.0.0.2, port=4420 00:31:46.137 [2024-11-28 13:02:16.148454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9793f0 is same with the state(6) to be set 00:31:46.137 [2024-11-28 13:02:16.148463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda1850 (9): Bad file descriptor 00:31:46.137 [2024-11-28 13:02:16.148475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x977180 (9): Bad file descriptor 00:31:46.137 [2024-11-28 13:02:16.148484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:31:46.137 [2024-11-28 13:02:16.148490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:31:46.137 [2024-11-28 13:02:16.148497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:31:46.137 [2024-11-28 13:02:16.148503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:31:46.137 [2024-11-28 13:02:16.148510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:31:46.137 [2024-11-28 13:02:16.148517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:31:46.137 [2024-11-28 13:02:16.148524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:31:46.137 [2024-11-28 13:02:16.148530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:31:46.137 [2024-11-28 13:02:16.148538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:31:46.137 [2024-11-28 13:02:16.148544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:31:46.137 [2024-11-28 13:02:16.148551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:31:46.137 [2024-11-28 13:02:16.148557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:31:46.137 [2024-11-28 13:02:16.148586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9793f0 (9): Bad file descriptor 00:31:46.137 [2024-11-28 13:02:16.148596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:31:46.137 [2024-11-28 13:02:16.148602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:31:46.137 [2024-11-28 13:02:16.148610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:31:46.137 [2024-11-28 13:02:16.148616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:31:46.137 [2024-11-28 13:02:16.148623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:31:46.137 [2024-11-28 13:02:16.148630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:31:46.137 [2024-11-28 13:02:16.148637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:31:46.137 [2024-11-28 13:02:16.148644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:31:46.137 [2024-11-28 13:02:16.148672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:31:46.137 [2024-11-28 13:02:16.148679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:31:46.137 [2024-11-28 13:02:16.148685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:31:46.137 [2024-11-28 13:02:16.148692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:31:46.397 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3542681 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3542681 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3542681 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:47.339 rmmod nvme_tcp 00:31:47.339 rmmod nvme_fabrics 00:31:47.339 rmmod nvme_keyring 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:31:47.339 13:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3542304 ']' 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3542304 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3542304 ']' 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3542304 00:31:47.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3542304) - No such process 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3542304 is not found' 00:31:47.339 Process with pid 3542304 is not found 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.339 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.882 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:49.882 00:31:49.882 real 0m7.815s 00:31:49.882 user 0m18.671s 00:31:49.882 sys 0m1.280s 00:31:49.882 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:49.882 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:49.882 ************************************ 00:31:49.882 END TEST nvmf_shutdown_tc3 00:31:49.882 ************************************ 00:31:49.882 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:31:49.882 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:31:49.882 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:31:49.882 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:49.882 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:49.882 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:49.882 ************************************ 00:31:49.882 START TEST nvmf_shutdown_tc4 00:31:49.882 ************************************ 00:31:49.882 13:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:31:49.882 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:31:49.882 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:31:49.882 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:49.882 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:49.882 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:49.882 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:49.882 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:49.882 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.882 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.882 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:31:49.883 13:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:49.883 13:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:49.883 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:49.883 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:49.883 13:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:31:49.883 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:49.883 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:49.883 13:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:49.883 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:49.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:49.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:31:49.884 00:31:49.884 --- 10.0.0.2 ping statistics --- 00:31:49.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.884 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:49.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:49.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:31:49.884 00:31:49.884 --- 10.0.0.1 ping statistics --- 00:31:49.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.884 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:49.884 13:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3543951 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3543951 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3543951 ']' 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:49.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
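The `waitforlisten` step above polls until the target's RPC socket shows up at /var/tmp/spdk.sock. A minimal sketch of the idea (SPDK's real helper in autotest_common.sh also checks that the pid is alive and that RPCs respond; this version, with a hypothetical name, only watches for the socket path):

```shell
#!/usr/bin/env bash
# Sketch only: poll until a UNIX-domain socket appears at $1, giving up
# after $2 retries spaced 0.1 s apart. The real waitforlisten does more
# (pid liveness, RPC probe); this is just the socket-watch core.
waitforlisten_sketch() {
    local sock=$1 max_retries=${2:-100} i=0
    while [ "$i" -lt "$max_retries" ]; do
        # -S is true only when the path exists and is a socket
        [ -S "$sock" ] && return 0
        sleep 0.1
        i=$((i + 1))
    done
    return 1
}
```

For example, `waitforlisten_sketch /var/tmp/spdk.sock 600` would wait up to about a minute before giving up.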
00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:49.884 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:31:49.884 [2024-11-28 13:02:19.996354] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:31:49.884 [2024-11-28 13:02:19.996421] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:50.143 [2024-11-28 13:02:20.142925] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:50.143 [2024-11-28 13:02:20.196036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:50.143 [2024-11-28 13:02:20.219192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:50.143 [2024-11-28 13:02:20.219234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:50.143 [2024-11-28 13:02:20.219240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:50.143 [2024-11-28 13:02:20.219246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:50.143 [2024-11-28 13:02:20.219250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
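The interface plumbing logged earlier (common.sh@267–291) moves the target-side port into a private network namespace so initiator and target traffic cross real NICs instead of loopback. A dry-run sketch of that sequence (names and addresses are taken from this log; `run` only echoes, since the real commands need root plus the cvl_0_* interfaces):

```shell
#!/usr/bin/env bash
# Dry-run sketch of nvmf_tcp_init's namespace setup (common.sh@267-291).
# run() only echoes each command, so this is safe without root or the NICs.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk   # namespace holding the target-side interface
TGT_IF=cvl_0_0       # gets 10.0.0.2 inside the namespace
INI_IF=cvl_0_1       # stays in the root namespace with 10.0.0.1

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # root ns -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target ns -> initiator
```

The two pings mirror the log's common.sh@290/@291 checks: both directions must answer before the target app is started inside the namespace.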
00:31:50.143 [2024-11-28 13:02:20.220800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:50.143 [2024-11-28 13:02:20.220960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:50.143 [2024-11-28 13:02:20.221080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:50.143 [2024-11-28 13:02:20.221079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:50.713 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:50.713 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:31:50.713 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:50.713 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:50.713 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:31:50.974 [2024-11-28 13:02:20.846031] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.974 13:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.974 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:31:50.974 Malloc1 00:31:50.974 [2024-11-28 13:02:20.955855] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:50.974 Malloc2 00:31:50.974 Malloc3 00:31:50.974 Malloc4 00:31:50.974 Malloc5 00:31:51.234 Malloc6 00:31:51.234 Malloc7 00:31:51.234 Malloc8 00:31:51.234 Malloc9 
00:31:51.234 Malloc10 00:31:51.234 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.234 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:31:51.234 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:51.234 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:31:51.234 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3544210 00:31:51.234 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:31:51.234 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:31:51.493 [2024-11-28 13:02:21.544476] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
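The shutdown.sh@27–29 loop above builds a single rpcs.txt by cat-appending one RPC snippet per subsystem (1..10) and replaying the file in one batch, which is why the ten Malloc bdevs appear together. A self-contained sketch of that pattern (the two RPC lines per subsystem are illustrative only; the real script also creates the Malloc bdev and adds it as a namespace for each cnode):

```shell
#!/usr/bin/env bash
# Sketch of shutdown.sh's batched subsystem creation: append one RPC
# snippet per subsystem to a scratch file, then replay it in one batch.
RPCS=$(mktemp)
num_subsystems=({1..10})

for i in "${num_subsystems[@]}"; do
    cat >> "$RPCS" <<EOF
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done

count=$(grep -c '^nvmf_create_subsystem' "$RPCS")
echo "queued $count subsystems in $RPCS"
# In the real test the finished file is replayed through scripts/rpc.py
rm -f "$RPCS"
```

Batching the RPCs this way avoids paying the rpc.py startup cost once per subsystem.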
00:31:56.780 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:56.780 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3543951 00:31:56.780 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3543951 ']' 00:31:56.780 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3543951 00:31:56.780 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:31:56.780 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:56.780 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3543951 00:31:56.780 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:56.780 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:56.780 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3543951' 00:31:56.780 killing process with pid 3543951 00:31:56.780 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3543951 00:31:56.780 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3543951 00:31:56.780 [2024-11-28 13:02:26.428611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aaa20 is same with the state(6) to be set 00:31:56.780 [2024-11-28 
13:02:26.428659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aaa20 is same with the state(6) to be set 00:31:56.780 [2024-11-28 13:02:26.428800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249b640 is same with the state(6) to be set 00:31:56.780 (previous message repeated 11 more times for tqpair=0x249b640)
00:31:56.780 Write completed with error (sct=0, sc=8) 00:31:56.780 starting I/O failed: -6 00:31:56.780 (the two messages above alternate for the remaining queued writes on this qpair) [2024-11-28 13:02:26.430290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:56.781 Write completed with error (sct=0, sc=8) 00:31:56.781 starting I/O failed: -6 00:31:56.781 (messages repeated as above) [2024-11-28 13:02:26.431258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:56.781 Write completed with error (sct=0, sc=8) 00:31:56.781 starting I/O failed: -6 00:31:56.781 (messages repeated as above) [2024-11-28 13:02:26.432502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:56.782 NVMe io qpair process completion error
00:31:56.782 [2024-11-28 13:02:26.434352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ab780 is same with the state(6) to be set 00:31:56.782 (previous message repeated 4 more times for tqpair=0x24ab780) [2024-11-28 13:02:26.434577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24abc70 is same with the state(6) to be set 00:31:56.782 (previous message repeated 3 more times for tqpair=0x24abc70) [2024-11-28 13:02:26.434818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aadc0 is same with the state(6) to be set 00:31:56.782 (previous message repeated 8 more times for tqpair=0x24aadc0)
00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8)
00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 [2024-11-28 13:02:26.435534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249b170 is same with starting I/O failed: -6 00:31:56.782 the state(6) to be set 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 [2024-11-28 13:02:26.435548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249b170 is same with the state(6) to be set 00:31:56.782 [2024-11-28 13:02:26.435554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249b170 is same with the state(6) to be set 00:31:56.782 [2024-11-28 13:02:26.435559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249b170 is 
same with the state(6) to be set 00:31:56.782 [2024-11-28 13:02:26.435564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249b170 is same with the state(6) to be set 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 [2024-11-28 13:02:26.435569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249b170 is same with the state(6) to be set 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error 
(sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 [2024-11-28 13:02:26.436289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:56.782 NVMe io qpair process completion error 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O 
failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 
00:31:56.782 starting I/O failed: -6 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.782 Write completed with error (sct=0, sc=8) 00:31:56.783 [2024-11-28 13:02:26.437535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 
00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 [2024-11-28 13:02:26.438348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.783 starting I/O failed: -6 00:31:56.783 starting I/O failed: -6 00:31:56.783 starting I/O failed: -6 00:31:56.783 starting I/O failed: -6 00:31:56.783 starting I/O failed: -6 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with 
error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 
starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 [2024-11-28 13:02:26.439335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 
00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: 
-6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.783 Write completed with error (sct=0, sc=8) 00:31:56.783 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O 
failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 [2024-11-28 13:02:26.440760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:56.784 NVMe io qpair process completion error 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed 
with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 [2024-11-28 13:02:26.441955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 
00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6 00:31:56.784 Write 
completed with error (sct=0, sc=8) 00:31:56.784 Write completed with error (sct=0, sc=8) 00:31:56.784 starting I/O failed: -6
00:31:56.784 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:31:56.784 [2024-11-28 13:02:26.442851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:56.785 [repeated write-completion error lines elided]
00:31:56.785 [2024-11-28 13:02:26.443767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:56.785 [repeated write-completion error lines elided]
00:31:56.785 [2024-11-28 13:02:26.446646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:56.785 NVMe io qpair process completion error
00:31:56.785 [repeated write-completion error lines elided]
00:31:56.786 [2024-11-28 13:02:26.447780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:56.786 [repeated write-completion error lines elided]
00:31:56.786 [2024-11-28 13:02:26.448670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:56.786 [repeated write-completion error lines elided]
00:31:56.786 [2024-11-28 13:02:26.449561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:56.787 [repeated write-completion error lines elided]
00:31:56.787 [2024-11-28 13:02:26.451244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:56.787 NVMe io qpair process completion error
00:31:56.787 [repeated write-completion error lines elided]
00:31:56.787 [2024-11-28 13:02:26.452315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:56.787 [repeated write-completion error lines elided]
00:31:56.787 [2024-11-28 13:02:26.453344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:56.788 [repeated write-completion error lines elided]
00:31:56.788 [2024-11-28 13:02:26.454275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:56.788 [repeated write-completion error lines elided]
00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 starting I/O failed:
-6 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 starting I/O failed: -6 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 starting I/O failed: -6 00:31:56.788 [2024-11-28 13:02:26.455950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:56.788 NVMe io qpair process completion error 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 starting I/O failed: -6 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 starting I/O failed: -6 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 starting I/O failed: -6 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 starting I/O failed: -6 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 starting I/O failed: -6 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 starting I/O failed: -6 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error 
(sct=0, sc=8) 00:31:56.788 starting I/O failed: -6 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 starting I/O failed: -6 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 starting I/O failed: -6 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 starting I/O failed: -6 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.788 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 [2024-11-28 13:02:26.457176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 
00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write 
completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 [2024-11-28 13:02:26.457967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 
starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 
Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 [2024-11-28 13:02:26.458889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write 
completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.789 starting I/O failed: -6 00:31:56.789 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 
Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 
00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 [2024-11-28 13:02:26.462569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:56.790 NVMe io qpair process completion error 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed 
with error (sct=0, sc=8) 00:31:56.790 [2024-11-28 13:02:26.463647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed 
with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 [2024-11-28 13:02:26.464468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:56.790 starting I/O failed: -6 00:31:56.790 starting I/O failed: -6 00:31:56.790 starting I/O failed: -6 00:31:56.790 starting I/O failed: -6 00:31:56.790 starting I/O failed: -6 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write 
completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 starting I/O failed: -6 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.790 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: -6 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: -6 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: -6 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: -6 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: -6 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: -6 00:31:56.791 Write completed with error (sct=0, sc=8) 
00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: -6 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: -6 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: -6 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: -6 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: -6 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: -6 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: -6 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: -6 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: -6 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: -6 00:31:56.791 [2024-11-28 13:02:26.465649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: -6 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: -6 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: -6 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: -6 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: -6 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: -6 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: -6 00:31:56.791 Write completed with error (sct=0, sc=8) 00:31:56.791 starting I/O failed: 
-6
00:31:56.791 Write completed with error (sct=0, sc=8)
00:31:56.791 starting I/O failed: -6
(repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted)
00:31:56.791 [2024-11-28 13:02:26.467327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:56.791 NVMe io qpair process completion error
00:31:56.792 [2024-11-28 13:02:26.468431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:56.792 [2024-11-28 13:02:26.469243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:56.792 [2024-11-28 13:02:26.470591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:56.793 [2024-11-28 13:02:26.473759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:56.793 NVMe io qpair process completion error
00:31:56.793 [2024-11-28 13:02:26.474877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:56.793 [2024-11-28 13:02:26.475786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:56.794 [2024-11-28 13:02:26.476702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:56.794 [2024-11-28 13:02:26.478355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:56.794 NVMe io qpair process completion error
00:31:56.794 Write completed with error (sct=0, sc=8)
(repeated "Write completed with error (sct=0, sc=8)" entries omitted)
00:31:56.795 Write completed with error (sct=0,
sc=8) 00:31:56.795 [2024-11-28 13:02:26.480301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:56.795 NVMe io qpair process completion error 00:31:56.795 Initializing NVMe Controllers 00:31:56.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:31:56.795 Controller IO queue size 128, less than required. 00:31:56.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:56.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:56.795 Controller IO queue size 128, less than required. 00:31:56.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:56.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:31:56.795 Controller IO queue size 128, less than required. 00:31:56.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:56.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:31:56.795 Controller IO queue size 128, less than required. 00:31:56.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:56.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:31:56.795 Controller IO queue size 128, less than required. 00:31:56.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:56.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:31:56.795 Controller IO queue size 128, less than required. 00:31:56.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:56.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:31:56.795 Controller IO queue size 128, less than required.
00:31:56.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:56.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:31:56.795 Controller IO queue size 128, less than required.
00:31:56.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:56.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:31:56.795 Controller IO queue size 128, less than required.
00:31:56.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:56.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:31:56.795 Controller IO queue size 128, less than required.
00:31:56.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:56.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:31:56.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:56.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:31:56.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:31:56.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:31:56.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:31:56.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:31:56.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:31:56.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:31:56.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:31:56.795 Initialization complete. Launching workers.
00:31:56.795 ========================================================
00:31:56.795                                                                                                      Latency(us)
00:31:56.795 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:31:56.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    1935.40      83.16   66155.39     767.54  133559.54
00:31:56.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1882.75      80.90   67280.97     556.98  148980.80
00:31:56.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    1912.15      82.16   66758.01     472.01  133138.08
00:31:56.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    1875.07      80.57   67586.80     721.88  119641.92
00:31:56.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1885.38      81.01   67237.46     789.73  117753.27
00:31:56.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    1828.12      78.55   69385.81     908.94  120951.45
00:31:56.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    1898.11      81.56   66853.43     493.74  116026.74
00:31:56.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    1893.28      81.35   67046.30     847.46  124719.72
00:31:56.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    1906.22      81.91   66646.23     501.23  119529.90
00:31:56.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    1882.53      80.89   67512.79     690.27  130269.55
00:31:56.795 ========================================================
00:31:56.795 Total                                                                    :   18899.02     812.07   67235.02     472.01  148980.80
00:31:56.795
00:31:56.795 [2024-11-28 13:02:26.485933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2475e40 is same with the state(6) to be set
00:31:56.795 [2024-11-28 13:02:26.485978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24783b0 is same with the state(6) to be set
00:31:56.795 [2024-11-28 13:02:26.486007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2475b10 is same with the state(6) to be set
00:31:56.795 [2024-11-28 13:02:26.486035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2477b30 is same with the state(6) to be set
00:31:56.795 [2024-11-28 13:02:26.486067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2478be0 is same with the state(6) to be set
00:31:56.795 [2024-11-28 13:02:26.486096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2475300 is same with the state(6) to be set
00:31:56.795 [2024-11-28 13:02:26.486126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2473140 is same with the state(6) to be set
00:31:56.795 [2024-11-28 13:02:26.486153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2471250 is same with the state(6) to be set
00:31:56.795 [2024-11-28 13:02:26.486193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472b60 is same with the state(6) to be set
00:31:56.795 [2024-11-28 13:02:26.486227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2475630 is same with the state(6) to be set
00:31:56.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:31:56.795 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3544210
00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3544210
00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3544210 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:57.736 rmmod nvme_tcp 00:31:57.736 rmmod nvme_fabrics 00:31:57.736 rmmod nvme_keyring 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3543951 ']' 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3543951 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3543951 ']' 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3543951 00:31:57.736 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3543951) - No such process 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3543951 is not found' 00:31:57.736 Process with pid 3543951 is not found 
00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.736 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:57.737 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.281 13:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:00.281 00:32:00.281 real 0m10.301s 00:32:00.281 user 0m27.638s 00:32:00.281 sys 0m3.910s 00:32:00.281 13:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:00.281 13:02:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:00.281 ************************************ 00:32:00.281 END TEST nvmf_shutdown_tc4 00:32:00.281 ************************************ 00:32:00.281 13:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:32:00.281 00:32:00.281 real 0m43.791s 00:32:00.281 user 1m44.893s 00:32:00.281 sys 0m13.946s 00:32:00.281 13:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:00.281 13:02:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:00.281 ************************************ 00:32:00.281 END TEST nvmf_shutdown 00:32:00.281 ************************************ 00:32:00.281 13:02:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:32:00.281 13:02:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:00.281 13:02:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:00.281 13:02:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:32:00.281 ************************************ 00:32:00.281 START TEST nvmf_nsid 00:32:00.281 ************************************ 00:32:00.281 13:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:32:00.281 * Looking for test storage... 
00:32:00.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:00.281 
13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:00.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.281 --rc genhtml_branch_coverage=1 00:32:00.281 --rc genhtml_function_coverage=1 00:32:00.281 --rc genhtml_legend=1 00:32:00.281 --rc geninfo_all_blocks=1 00:32:00.281 --rc 
geninfo_unexecuted_blocks=1 00:32:00.281 00:32:00.281 ' 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:00.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.281 --rc genhtml_branch_coverage=1 00:32:00.281 --rc genhtml_function_coverage=1 00:32:00.281 --rc genhtml_legend=1 00:32:00.281 --rc geninfo_all_blocks=1 00:32:00.281 --rc geninfo_unexecuted_blocks=1 00:32:00.281 00:32:00.281 ' 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:00.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.281 --rc genhtml_branch_coverage=1 00:32:00.281 --rc genhtml_function_coverage=1 00:32:00.281 --rc genhtml_legend=1 00:32:00.281 --rc geninfo_all_blocks=1 00:32:00.281 --rc geninfo_unexecuted_blocks=1 00:32:00.281 00:32:00.281 ' 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:00.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.281 --rc genhtml_branch_coverage=1 00:32:00.281 --rc genhtml_function_coverage=1 00:32:00.281 --rc genhtml_legend=1 00:32:00.281 --rc geninfo_all_blocks=1 00:32:00.281 --rc geninfo_unexecuted_blocks=1 00:32:00.281 00:32:00.281 ' 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:00.281 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.282 13:02:30 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:00.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:32:00.282 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:32:08.420 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:08.420 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:32:08.420 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:08.420 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:08.420 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:08.420 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:08.420 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:08.420 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:32:08.420 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:08.420 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:32:08.420 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:32:08.420 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:32:08.420 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:32:08.420 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:32:08.420 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:32:08.420 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:08.420 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:08.420 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:08.421 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:08.421 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:08.421 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:08.421 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:08.421 13:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:08.421 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:32:08.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:32:08.421 00:32:08.421 --- 10.0.0.2 ping statistics --- 00:32:08.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.421 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:08.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:08.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:32:08.421 00:32:08.421 --- 10.0.0.1 ping statistics --- 00:32:08.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.421 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:08.421 13:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3549584 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3549584 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3549584 ']' 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:08.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:08.421 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:32:08.422 [2024-11-28 13:02:37.901286] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:32:08.422 [2024-11-28 13:02:37.901356] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:08.422 [2024-11-28 13:02:38.045073] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:32:08.422 [2024-11-28 13:02:38.104311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.422 [2024-11-28 13:02:38.130674] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:08.422 [2024-11-28 13:02:38.130721] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:08.422 [2024-11-28 13:02:38.130730] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:08.422 [2024-11-28 13:02:38.130738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:08.422 [2024-11-28 13:02:38.130744] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:08.422 [2024-11-28 13:02:38.131507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3549901 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:32:08.683 
13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=23e9b6c6-9b9d-4b19-950d-5cc8c5298d20 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=242ade97-2a37-4232-8b20-e74b14f0ced5 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 
00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=8cc12e51-c3c6-461f-ac8c-7742209b333f 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.683 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:32:08.683 null0 00:32:08.683 null1 00:32:08.944 [2024-11-28 13:02:38.809650] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:32:08.944 [2024-11-28 13:02:38.809719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3549901 ] 00:32:08.944 null2 00:32:08.944 [2024-11-28 13:02:38.813518] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:08.944 [2024-11-28 13:02:38.837767] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:08.944 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.944 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3549901 /var/tmp/tgt2.sock 00:32:08.944 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3549901 ']' 00:32:08.944 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:32:08.944 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:08.944 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
00:32:08.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:32:08.944 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:08.944 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:32:08.944 [2024-11-28 13:02:38.946823] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:08.944 [2024-11-28 13:02:39.006881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.944 [2024-11-28 13:02:39.034690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.204 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:09.204 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:32:09.204 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:32:09.465 [2024-11-28 13:02:39.540568] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:09.465 [2024-11-28 13:02:39.556712] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:32:09.465 nvme0n1 nvme0n2 00:32:09.465 nvme1n1 00:32:09.726 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:32:09.726 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:32:09.726 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:11.110 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 
00:32:11.110 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:32:11.110 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:32:11.110 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:32:11.110 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:32:11.110 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:32:11.110 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:32:11.110 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:32:11.110 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:32:11.110 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:32:11.110 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:32:11.110 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:32:11.110 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:32:12.053 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:32:12.053 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:32:12.053 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:32:12.053 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:32:12.053 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:32:12.053 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@96 -- # uuid2nguid 23e9b6c6-9b9d-4b19-950d-5cc8c5298d20 00:32:12.053 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:32:12.053 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:32:12.053 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:32:12.053 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:32:12.054 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:32:12.054 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=23e9b6c69b9d4b19950d5cc8c5298d20 00:32:12.054 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 23E9B6C69B9D4B19950D5CC8C5298D20 00:32:12.054 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 23E9B6C69B9D4B19950D5CC8C5298D20 == \2\3\E\9\B\6\C\6\9\B\9\D\4\B\1\9\9\5\0\D\5\C\C\8\C\5\2\9\8\D\2\0 ]] 00:32:12.054 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:32:12.054 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:32:12.054 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:32:12.054 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:32:12.054 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:32:12.054 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:32:12.054 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:32:12.054 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 242ade97-2a37-4232-8b20-e74b14f0ced5 00:32:12.315 13:02:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:32:12.315 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:32:12.315 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:32:12.315 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:32:12.315 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:32:12.315 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=242ade972a3742328b20e74b14f0ced5 00:32:12.315 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 242ADE972A3742328B20E74B14F0CED5 00:32:12.315 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 242ADE972A3742328B20E74B14F0CED5 == \2\4\2\A\D\E\9\7\2\A\3\7\4\2\3\2\8\B\2\0\E\7\4\B\1\4\F\0\C\E\D\5 ]] 00:32:12.315 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:32:12.315 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:32:12.315 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:32:12.315 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:32:12.315 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:32:12.315 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:32:12.315 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:32:12.315 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 8cc12e51-c3c6-461f-ac8c-7742209b333f 00:32:12.315 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:32:12.315 13:02:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:32:12.315 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:32:12.315 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:32:12.315 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:32:12.315 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8cc12e51c3c6461fac8c7742209b333f 00:32:12.315 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8CC12E51C3C6461FAC8C7742209B333F 00:32:12.315 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 8CC12E51C3C6461FAC8C7742209B333F == \8\C\C\1\2\E\5\1\C\3\C\6\4\6\1\F\A\C\8\C\7\7\4\2\2\0\9\B\3\3\3\F ]] 00:32:12.315 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:32:12.575 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:32:12.575 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:32:12.575 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3549901 00:32:12.575 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3549901 ']' 00:32:12.575 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3549901 00:32:12.575 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:32:12.575 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:12.575 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3549901 00:32:12.575 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:12.575 
13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:12.575 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3549901' 00:32:12.575 killing process with pid 3549901 00:32:12.575 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3549901 00:32:12.575 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3549901 00:32:12.836 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:32:12.836 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:12.836 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:32:12.836 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:12.836 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:32:12.836 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:12.836 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:12.836 rmmod nvme_tcp 00:32:12.836 rmmod nvme_fabrics 00:32:12.836 rmmod nvme_keyring 00:32:12.836 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:12.836 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:32:12.836 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:32:12.836 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3549584 ']' 00:32:12.836 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3549584 00:32:12.836 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3549584 ']' 00:32:12.836 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@958 -- # kill -0 3549584 00:32:12.836 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:32:12.836 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:12.836 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3549584 00:32:12.836 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:12.836 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:12.836 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3549584' 00:32:12.836 killing process with pid 3549584 00:32:12.836 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3549584 00:32:12.836 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3549584 00:32:13.098 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:13.098 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:13.098 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:13.098 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:32:13.098 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:32:13.098 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:13.098 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:32:13.098 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:13.098 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:32:13.098 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:13.098 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:13.098 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.012 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:15.012 00:32:15.012 real 0m15.103s 00:32:15.012 user 0m11.256s 00:32:15.012 sys 0m7.014s 00:32:15.012 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:15.012 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:32:15.012 ************************************ 00:32:15.012 END TEST nvmf_nsid 00:32:15.012 ************************************ 00:32:15.012 13:02:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:32:15.012 00:32:15.012 real 19m59.771s 00:32:15.012 user 52m0.464s 00:32:15.012 sys 4m56.073s 00:32:15.012 13:02:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:15.012 13:02:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:32:15.012 ************************************ 00:32:15.012 END TEST nvmf_target_extra 00:32:15.012 ************************************ 00:32:15.272 13:02:45 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:32:15.272 13:02:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:15.272 13:02:45 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:15.272 13:02:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:15.272 ************************************ 00:32:15.272 START TEST nvmf_host 00:32:15.272 
************************************ 00:32:15.272 13:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:32:15.272 * Looking for test storage... 00:32:15.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:32:15.272 13:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:15.272 13:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:32:15.273 13:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l 
? ver1_l : ver2_l) )) 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:15.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.534 --rc genhtml_branch_coverage=1 00:32:15.534 --rc genhtml_function_coverage=1 00:32:15.534 --rc genhtml_legend=1 00:32:15.534 --rc geninfo_all_blocks=1 00:32:15.534 --rc geninfo_unexecuted_blocks=1 00:32:15.534 00:32:15.534 ' 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:15.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.534 --rc genhtml_branch_coverage=1 00:32:15.534 --rc genhtml_function_coverage=1 00:32:15.534 --rc genhtml_legend=1 00:32:15.534 --rc 
geninfo_all_blocks=1 00:32:15.534 --rc geninfo_unexecuted_blocks=1 00:32:15.534 00:32:15.534 ' 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:15.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.534 --rc genhtml_branch_coverage=1 00:32:15.534 --rc genhtml_function_coverage=1 00:32:15.534 --rc genhtml_legend=1 00:32:15.534 --rc geninfo_all_blocks=1 00:32:15.534 --rc geninfo_unexecuted_blocks=1 00:32:15.534 00:32:15.534 ' 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:15.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.534 --rc genhtml_branch_coverage=1 00:32:15.534 --rc genhtml_function_coverage=1 00:32:15.534 --rc genhtml_legend=1 00:32:15.534 --rc geninfo_all_blocks=1 00:32:15.534 --rc geninfo_unexecuted_blocks=1 00:32:15.534 00:32:15.534 ' 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:15.534 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:15.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.535 ************************************ 00:32:15.535 START TEST nvmf_multicontroller 00:32:15.535 ************************************ 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:32:15.535 * Looking for test storage... 
00:32:15.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:32:15.535 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:15.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.797 --rc genhtml_branch_coverage=1 00:32:15.797 --rc genhtml_function_coverage=1 
00:32:15.797 --rc genhtml_legend=1 00:32:15.797 --rc geninfo_all_blocks=1 00:32:15.797 --rc geninfo_unexecuted_blocks=1 00:32:15.797 00:32:15.797 ' 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:15.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.797 --rc genhtml_branch_coverage=1 00:32:15.797 --rc genhtml_function_coverage=1 00:32:15.797 --rc genhtml_legend=1 00:32:15.797 --rc geninfo_all_blocks=1 00:32:15.797 --rc geninfo_unexecuted_blocks=1 00:32:15.797 00:32:15.797 ' 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:15.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.797 --rc genhtml_branch_coverage=1 00:32:15.797 --rc genhtml_function_coverage=1 00:32:15.797 --rc genhtml_legend=1 00:32:15.797 --rc geninfo_all_blocks=1 00:32:15.797 --rc geninfo_unexecuted_blocks=1 00:32:15.797 00:32:15.797 ' 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:15.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.797 --rc genhtml_branch_coverage=1 00:32:15.797 --rc genhtml_function_coverage=1 00:32:15.797 --rc genhtml_legend=1 00:32:15.797 --rc geninfo_all_blocks=1 00:32:15.797 --rc geninfo_unexecuted_blocks=1 00:32:15.797 00:32:15.797 ' 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:15.797 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:15.798 13:02:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:15.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:32:15.798 13:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:23.937 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:23.937 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:23.937 13:02:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:23.937 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.937 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:23.938 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:23.938 13:02:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:23.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:23.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:32:23.938 00:32:23.938 --- 10.0.0.2 ping statistics --- 00:32:23.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.938 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:23.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:23.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:32:23.938 00:32:23.938 --- 10.0.0.1 ping statistics --- 00:32:23.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.938 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3555006 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3555006 00:32:23.938 13:02:53 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3555006 ']' 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:23.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:23.938 13:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:23.938 [2024-11-28 13:02:53.370754] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:32:23.938 [2024-11-28 13:02:53.370824] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:23.938 [2024-11-28 13:02:53.516494] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:23.938 [2024-11-28 13:02:53.574983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:23.938 [2024-11-28 13:02:53.602654] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:23.938 [2024-11-28 13:02:53.602698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:23.938 [2024-11-28 13:02:53.602707] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:23.938 [2024-11-28 13:02:53.602714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:23.938 [2024-11-28 13:02:53.602721] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:23.938 [2024-11-28 13:02:53.604452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:23.938 [2024-11-28 13:02:53.604680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:23.938 [2024-11-28 13:02:53.604681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:24.199 [2024-11-28 
13:02:54.247873] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:24.199 Malloc0 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:24.199 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:24.199 [2024-11-28 13:02:54.321521] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:24.461 [2024-11-28 13:02:54.333315] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:24.461 Malloc1 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3555079 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3555079 /var/tmp/bdevperf.sock 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3555079 ']' 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:24.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:24.461 13:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:25.403 NVMe0n1 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.403 1 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:25.403 request: 00:32:25.403 { 00:32:25.403 "name": "NVMe0", 00:32:25.403 "trtype": "tcp", 00:32:25.403 "traddr": "10.0.0.2", 00:32:25.403 "adrfam": "ipv4", 00:32:25.403 "trsvcid": "4420", 00:32:25.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:25.403 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:32:25.403 "hostaddr": "10.0.0.1", 00:32:25.403 "prchk_reftag": false, 00:32:25.403 "prchk_guard": false, 00:32:25.403 "hdgst": false, 00:32:25.403 "ddgst": false, 00:32:25.403 "allow_unrecognized_csi": false, 00:32:25.403 "method": "bdev_nvme_attach_controller", 00:32:25.403 "req_id": 1 00:32:25.403 } 00:32:25.403 Got JSON-RPC error response 00:32:25.403 response: 00:32:25.403 { 00:32:25.403 "code": -114, 00:32:25.403 "message": "A controller named NVMe0 already exists with the specified network path" 00:32:25.403 } 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 
10.0.0.1 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:25.403 request: 00:32:25.403 { 00:32:25.403 "name": "NVMe0", 00:32:25.403 "trtype": "tcp", 00:32:25.403 "traddr": "10.0.0.2", 00:32:25.403 "adrfam": "ipv4", 00:32:25.403 "trsvcid": "4420", 00:32:25.403 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:25.403 "hostaddr": "10.0.0.1", 00:32:25.403 "prchk_reftag": false, 00:32:25.403 "prchk_guard": false, 00:32:25.403 "hdgst": false, 00:32:25.403 "ddgst": false, 00:32:25.403 "allow_unrecognized_csi": false, 00:32:25.403 "method": "bdev_nvme_attach_controller", 00:32:25.403 "req_id": 1 00:32:25.403 } 00:32:25.403 Got JSON-RPC error response 00:32:25.403 response: 00:32:25.403 { 00:32:25.403 "code": 
-114, 00:32:25.403 "message": "A controller named NVMe0 already exists with the specified network path" 00:32:25.403 } 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:25.403 request: 00:32:25.403 { 00:32:25.403 "name": "NVMe0", 00:32:25.403 "trtype": "tcp", 00:32:25.403 "traddr": "10.0.0.2", 00:32:25.403 "adrfam": "ipv4", 00:32:25.403 "trsvcid": "4420", 00:32:25.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:25.403 "hostaddr": "10.0.0.1", 00:32:25.403 "prchk_reftag": false, 00:32:25.403 "prchk_guard": false, 00:32:25.403 "hdgst": false, 00:32:25.403 "ddgst": false, 00:32:25.403 "multipath": "disable", 00:32:25.403 "allow_unrecognized_csi": false, 00:32:25.403 "method": "bdev_nvme_attach_controller", 00:32:25.403 "req_id": 1 00:32:25.403 } 00:32:25.403 Got JSON-RPC error response 00:32:25.403 response: 00:32:25.403 { 00:32:25.403 "code": -114, 00:32:25.403 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:32:25.403 } 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@652 -- # local es=0 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:25.403 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:25.404 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:32:25.404 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.404 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:25.404 request: 00:32:25.404 { 00:32:25.404 "name": "NVMe0", 00:32:25.404 "trtype": "tcp", 00:32:25.404 "traddr": "10.0.0.2", 00:32:25.404 "adrfam": "ipv4", 00:32:25.404 "trsvcid": "4420", 00:32:25.404 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:25.404 "hostaddr": "10.0.0.1", 00:32:25.404 "prchk_reftag": false, 00:32:25.404 "prchk_guard": false, 00:32:25.404 "hdgst": false, 00:32:25.404 "ddgst": false, 00:32:25.404 "multipath": "failover", 00:32:25.404 "allow_unrecognized_csi": false, 00:32:25.404 "method": "bdev_nvme_attach_controller", 00:32:25.404 "req_id": 1 00:32:25.404 } 00:32:25.404 Got JSON-RPC error response 00:32:25.404 response: 00:32:25.404 { 00:32:25.404 "code": -114, 
00:32:25.404 "message": "A controller named NVMe0 already exists with the specified network path" 00:32:25.404 } 00:32:25.404 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:25.404 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:32:25.404 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:25.404 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:25.404 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:25.404 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:25.404 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.404 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:25.664 NVMe0n1 00:32:25.664 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.665 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:25.665 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.665 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:25.665 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.665 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp 
-a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:32:25.665 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.665 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:25.925 00:32:25.925 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.925 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:25.925 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:32:25.925 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.925 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:25.925 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.925 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:32:25.925 13:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:27.462 { 00:32:27.462 "results": [ 00:32:27.462 { 00:32:27.462 "job": "NVMe0n1", 00:32:27.462 "core_mask": "0x1", 00:32:27.462 "workload": "write", 00:32:27.462 "status": "finished", 00:32:27.462 "queue_depth": 128, 00:32:27.462 "io_size": 4096, 00:32:27.462 "runtime": 1.005166, 00:32:27.462 "iops": 28149.57927347324, 00:32:27.462 "mibps": 109.95929403700484, 00:32:27.462 "io_failed": 0, 00:32:27.462 "io_timeout": 0, 00:32:27.462 "avg_latency_us": 4537.4401404419705, 00:32:27.462 "min_latency_us": 2381.2362178416306, 00:32:27.462 "max_latency_us": 13137.854994988305 00:32:27.462 } 00:32:27.462 ], 00:32:27.462 "core_count": 1 
00:32:27.462 } 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3555079 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3555079 ']' 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3555079 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3555079 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3555079' 00:32:27.462 killing process with pid 3555079 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3555079 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3555079 
00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:32:27.462 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:32:27.462 [2024-11-28 13:02:54.466082] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 
initialization... 00:32:27.462 [2024-11-28 13:02:54.466169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3555079 ] 00:32:27.462 [2024-11-28 13:02:54.603772] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:27.462 [2024-11-28 13:02:54.663530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.462 [2024-11-28 13:02:54.692214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.462 [2024-11-28 13:02:55.878421] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 5a5fd9c5-39b4-445d-99cc-b4f637cbbc0d already exists 00:32:27.462 [2024-11-28 13:02:55.878465] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:5a5fd9c5-39b4-445d-99cc-b4f637cbbc0d alias for bdev NVMe1n1 00:32:27.462 [2024-11-28 13:02:55.878475] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:32:27.462 Running I/O for 1 seconds... 
00:32:27.462 28120.00 IOPS, 109.84 MiB/s 00:32:27.462 Latency(us) 00:32:27.462 [2024-11-28T12:02:57.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:27.462 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:32:27.462 NVMe0n1 : 1.01 28149.58 109.96 0.00 0.00 4537.44 2381.24 13137.85 00:32:27.462 [2024-11-28T12:02:57.589Z] =================================================================================================================== 00:32:27.462 [2024-11-28T12:02:57.589Z] Total : 28149.58 109.96 0.00 0.00 4537.44 2381.24 13137.85 00:32:27.462 Received shutdown signal, test time was about 1.000000 seconds 00:32:27.462 00:32:27.462 Latency(us) 00:32:27.462 [2024-11-28T12:02:57.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:27.462 [2024-11-28T12:02:57.589Z] =================================================================================================================== 00:32:27.462 [2024-11-28T12:02:57.589Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:27.462 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:32:27.462 13:02:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:27.462 rmmod nvme_tcp 00:32:27.462 rmmod nvme_fabrics 00:32:27.462 rmmod nvme_keyring 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3555006 ']' 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3555006 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3555006 ']' 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3555006 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3555006 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3555006' 00:32:27.462 killing process with pid 3555006 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3555006 00:32:27.462 13:02:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3555006 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:32:27.462 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:27.463 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:32:27.463 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:27.463 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:27.463 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.463 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:27.463 13:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:30.057 00:32:30.057 real 0m14.114s 00:32:30.057 user 0m16.964s 00:32:30.057 sys 0m6.588s 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:30.057 ************************************ 00:32:30.057 END TEST nvmf_multicontroller 00:32:30.057 
************************************ 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.057 ************************************ 00:32:30.057 START TEST nvmf_aer 00:32:30.057 ************************************ 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:32:30.057 * Looking for test storage... 00:32:30.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- 
scripts/common.sh@337 -- # read -ra ver2 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- 
scripts/common.sh@368 -- # return 0 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:30.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.057 --rc genhtml_branch_coverage=1 00:32:30.057 --rc genhtml_function_coverage=1 00:32:30.057 --rc genhtml_legend=1 00:32:30.057 --rc geninfo_all_blocks=1 00:32:30.057 --rc geninfo_unexecuted_blocks=1 00:32:30.057 00:32:30.057 ' 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:30.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.057 --rc genhtml_branch_coverage=1 00:32:30.057 --rc genhtml_function_coverage=1 00:32:30.057 --rc genhtml_legend=1 00:32:30.057 --rc geninfo_all_blocks=1 00:32:30.057 --rc geninfo_unexecuted_blocks=1 00:32:30.057 00:32:30.057 ' 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:30.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.057 --rc genhtml_branch_coverage=1 00:32:30.057 --rc genhtml_function_coverage=1 00:32:30.057 --rc genhtml_legend=1 00:32:30.057 --rc geninfo_all_blocks=1 00:32:30.057 --rc geninfo_unexecuted_blocks=1 00:32:30.057 00:32:30.057 ' 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:30.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.057 --rc genhtml_branch_coverage=1 00:32:30.057 --rc genhtml_function_coverage=1 00:32:30.057 --rc genhtml_legend=1 00:32:30.057 --rc geninfo_all_blocks=1 00:32:30.057 --rc geninfo_unexecuted_blocks=1 00:32:30.057 00:32:30.057 ' 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:30.057 13:02:59 
nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:32:30.057 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.058 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:32:30.058 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:30.058 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:30.058 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:30.058 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:30.058 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:30.058 13:02:59 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:30.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:30.058 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:30.058 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:30.058 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:30.058 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:32:30.058 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:30.058 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:30.058 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:30.058 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:30.058 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:30.058 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.058 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:30.058 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.058 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:30.058 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:30.058 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:32:30.058 13:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 
00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:38.196 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:38.197 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:38.197 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:38.197 Found net devices under 0000:4b:00.0: cvl_0_0 
00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:38.197 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- 
# TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:38.197 13:03:07 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:38.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:38.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.705 ms 00:32:38.197 00:32:38.197 --- 10.0.0.2 ping statistics --- 00:32:38.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.197 rtt min/avg/max/mdev = 0.705/0.705/0.705/0.000 ms 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:38.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:38.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:32:38.197 00:32:38.197 --- 10.0.0.1 ping statistics --- 00:32:38.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.197 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:38.197 
13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3560007 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3560007 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3560007 ']' 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:38.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:38.197 13:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:38.197 [2024-11-28 13:03:07.602617] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:32:38.197 [2024-11-28 13:03:07.602688] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:38.197 [2024-11-28 13:03:07.747493] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:38.197 [2024-11-28 13:03:07.808047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:38.197 [2024-11-28 13:03:07.836902] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:38.197 [2024-11-28 13:03:07.836950] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:38.197 [2024-11-28 13:03:07.836958] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:38.197 [2024-11-28 13:03:07.836965] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:38.197 [2024-11-28 13:03:07.836971] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:38.197 [2024-11-28 13:03:07.839208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:38.197 [2024-11-28 13:03:07.839298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:38.197 [2024-11-28 13:03:07.839459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:38.197 [2024-11-28 13:03:07.839459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:38.459 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:38.459 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:32:38.459 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:38.460 [2024-11-28 13:03:08.479011] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:38.460 Malloc0 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:38.460 [2024-11-28 13:03:08.552977] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:38.460 [ 00:32:38.460 { 00:32:38.460 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:38.460 "subtype": "Discovery", 00:32:38.460 "listen_addresses": 
[], 00:32:38.460 "allow_any_host": true, 00:32:38.460 "hosts": [] 00:32:38.460 }, 00:32:38.460 { 00:32:38.460 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:38.460 "subtype": "NVMe", 00:32:38.460 "listen_addresses": [ 00:32:38.460 { 00:32:38.460 "trtype": "TCP", 00:32:38.460 "adrfam": "IPv4", 00:32:38.460 "traddr": "10.0.0.2", 00:32:38.460 "trsvcid": "4420" 00:32:38.460 } 00:32:38.460 ], 00:32:38.460 "allow_any_host": true, 00:32:38.460 "hosts": [], 00:32:38.460 "serial_number": "SPDK00000000000001", 00:32:38.460 "model_number": "SPDK bdev Controller", 00:32:38.460 "max_namespaces": 2, 00:32:38.460 "min_cntlid": 1, 00:32:38.460 "max_cntlid": 65519, 00:32:38.460 "namespaces": [ 00:32:38.460 { 00:32:38.460 "nsid": 1, 00:32:38.460 "bdev_name": "Malloc0", 00:32:38.460 "name": "Malloc0", 00:32:38.460 "nguid": "6902E380F710465B884C4B14F0A35B6D", 00:32:38.460 "uuid": "6902e380-f710-465b-884c-4b14f0a35b6d" 00:32:38.460 } 00:32:38.460 ] 00:32:38.460 } 00:32:38.460 ] 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3560188 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:32:38.460 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:32:38.722 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:38.722 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:32:38.722 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:32:38.722 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:32:38.722 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:38.722 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:32:38.722 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:32:38.722 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:32:38.984 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:38.984 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:32:38.984 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:32:38.984 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:32:38.984 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.984 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:38.984 Malloc1 00:32:38.984 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.984 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:32:38.984 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.984 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:38.984 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.984 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:32:38.984 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.984 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:38.984 Asynchronous Event Request test 00:32:38.984 Attaching to 10.0.0.2 00:32:38.984 Attached to 10.0.0.2 00:32:38.984 Registering asynchronous event callbacks... 00:32:38.984 Starting namespace attribute notice tests for all controllers... 00:32:38.984 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:32:38.984 aer_cb - Changed Namespace 00:32:38.984 Cleaning up... 
00:32:38.984 [ 00:32:38.984 { 00:32:38.984 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:38.984 "subtype": "Discovery", 00:32:38.984 "listen_addresses": [], 00:32:38.984 "allow_any_host": true, 00:32:38.984 "hosts": [] 00:32:38.984 }, 00:32:38.984 { 00:32:38.984 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:38.984 "subtype": "NVMe", 00:32:38.984 "listen_addresses": [ 00:32:38.984 { 00:32:38.984 "trtype": "TCP", 00:32:38.984 "adrfam": "IPv4", 00:32:38.984 "traddr": "10.0.0.2", 00:32:38.984 "trsvcid": "4420" 00:32:38.984 } 00:32:38.984 ], 00:32:38.984 "allow_any_host": true, 00:32:38.984 "hosts": [], 00:32:38.984 "serial_number": "SPDK00000000000001", 00:32:38.984 "model_number": "SPDK bdev Controller", 00:32:38.984 "max_namespaces": 2, 00:32:38.984 "min_cntlid": 1, 00:32:38.984 "max_cntlid": 65519, 00:32:38.984 "namespaces": [ 00:32:38.984 { 00:32:38.984 "nsid": 1, 00:32:38.984 "bdev_name": "Malloc0", 00:32:38.984 "name": "Malloc0", 00:32:38.984 "nguid": "6902E380F710465B884C4B14F0A35B6D", 00:32:38.984 "uuid": "6902e380-f710-465b-884c-4b14f0a35b6d" 00:32:38.984 }, 00:32:38.984 { 00:32:38.984 "nsid": 2, 00:32:38.984 "bdev_name": "Malloc1", 00:32:38.984 "name": "Malloc1", 00:32:38.984 "nguid": "05F0948032D9481A8B291C9BA0AB627F", 00:32:38.984 "uuid": "05f09480-32d9-481a-8b29-1c9ba0ab627f" 00:32:38.984 } 00:32:38.984 ] 00:32:38.984 } 00:32:38.984 ] 00:32:38.984 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.984 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3560188 00:32:38.984 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:32:38.984 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.984 13:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:38.984 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.984 13:03:09 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:32:38.984 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.984 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:38.984 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.984 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:38.984 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.984 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:38.984 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.984 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:32:38.984 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:32:38.984 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:38.984 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:32:38.984 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:38.984 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:32:38.984 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:38.984 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:38.984 rmmod nvme_tcp 00:32:38.984 rmmod nvme_fabrics 00:32:38.984 rmmod nvme_keyring 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
3560007 ']' 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3560007 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3560007 ']' 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3560007 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3560007 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3560007' 00:32:39.246 killing process with pid 3560007 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3560007 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3560007 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:39.246 13:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.793 13:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:41.793 00:32:41.793 real 0m11.753s 00:32:41.793 user 0m8.362s 00:32:41.793 sys 0m6.265s 00:32:41.793 13:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:41.793 13:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:41.793 ************************************ 00:32:41.793 END TEST nvmf_aer 00:32:41.793 ************************************ 00:32:41.793 13:03:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:32:41.793 13:03:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:41.793 13:03:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:41.793 13:03:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.793 ************************************ 00:32:41.793 START TEST nvmf_async_init 00:32:41.793 ************************************ 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:32:41.794 * Looking for test storage... 
00:32:41.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:41.794 13:03:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:41.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.794 --rc genhtml_branch_coverage=1 00:32:41.794 --rc genhtml_function_coverage=1 00:32:41.794 --rc genhtml_legend=1 00:32:41.794 --rc geninfo_all_blocks=1 00:32:41.794 --rc geninfo_unexecuted_blocks=1 00:32:41.794 
00:32:41.794 ' 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:41.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.794 --rc genhtml_branch_coverage=1 00:32:41.794 --rc genhtml_function_coverage=1 00:32:41.794 --rc genhtml_legend=1 00:32:41.794 --rc geninfo_all_blocks=1 00:32:41.794 --rc geninfo_unexecuted_blocks=1 00:32:41.794 00:32:41.794 ' 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:41.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.794 --rc genhtml_branch_coverage=1 00:32:41.794 --rc genhtml_function_coverage=1 00:32:41.794 --rc genhtml_legend=1 00:32:41.794 --rc geninfo_all_blocks=1 00:32:41.794 --rc geninfo_unexecuted_blocks=1 00:32:41.794 00:32:41.794 ' 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:41.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.794 --rc genhtml_branch_coverage=1 00:32:41.794 --rc genhtml_function_coverage=1 00:32:41.794 --rc genhtml_legend=1 00:32:41.794 --rc geninfo_all_blocks=1 00:32:41.794 --rc geninfo_unexecuted_blocks=1 00:32:41.794 00:32:41.794 ' 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:41.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:32:41.794 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:32:41.795 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:32:41.795 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=191a7e084cd24a7ba24d3941c48f3ad6 00:32:41.795 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:32:41.795 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:41.795 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:41.795 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:41.795 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:41.795 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:41.795 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.795 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:41.795 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.795 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:41.795 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:41.795 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:32:41.795 13:03:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:49.941 13:03:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:49.941 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:49.941 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:49.941 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:49.941 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:49.942 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:49.942 13:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:49.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:49.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:32:49.942 00:32:49.942 --- 10.0.0.2 ping statistics --- 00:32:49.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:49.942 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:49.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:49.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:32:49.942 00:32:49.942 --- 10.0.0.1 ping statistics --- 00:32:49.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:49.942 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3564984 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3564984 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3564984 ']' 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:49.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:49.942 13:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:49.942 [2024-11-28 13:03:19.381770] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:32:49.942 [2024-11-28 13:03:19.381856] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:49.942 [2024-11-28 13:03:19.526429] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:32:49.942 [2024-11-28 13:03:19.584012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.942 [2024-11-28 13:03:19.610480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:49.942 [2024-11-28 13:03:19.610524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:49.942 [2024-11-28 13:03:19.610532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:49.942 [2024-11-28 13:03:19.610539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:49.942 [2024-11-28 13:03:19.610546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:49.942 [2024-11-28 13:03:19.611326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.203 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:50.203 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:32:50.203 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:50.203 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:50.203 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:50.203 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:50.203 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:50.203 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.203 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:50.203 [2024-11-28 13:03:20.256131] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:50.203 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.203 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:32:50.203 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.203 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:50.203 null0 00:32:50.203 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.203 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:32:50.203 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.203 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:50.203 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.203 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:32:50.203 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.203 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:50.203 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.204 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 191a7e084cd24a7ba24d3941c48f3ad6 00:32:50.204 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.204 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:50.204 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.204 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:50.204 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.204 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:50.204 [2024-11-28 13:03:20.316367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:50.204 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.204 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:32:50.204 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.204 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:50.465 nvme0n1 00:32:50.465 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.465 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:32:50.465 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.465 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:50.465 [ 00:32:50.465 { 00:32:50.465 "name": "nvme0n1", 00:32:50.465 "aliases": [ 00:32:50.465 "191a7e08-4cd2-4a7b-a24d-3941c48f3ad6" 00:32:50.465 ], 00:32:50.465 "product_name": "NVMe disk", 00:32:50.465 "block_size": 512, 00:32:50.465 "num_blocks": 2097152, 00:32:50.465 "uuid": "191a7e08-4cd2-4a7b-a24d-3941c48f3ad6", 00:32:50.465 "numa_id": 0, 00:32:50.465 "assigned_rate_limits": { 00:32:50.465 "rw_ios_per_sec": 0, 00:32:50.465 
"rw_mbytes_per_sec": 0, 00:32:50.465 "r_mbytes_per_sec": 0, 00:32:50.465 "w_mbytes_per_sec": 0 00:32:50.465 }, 00:32:50.465 "claimed": false, 00:32:50.465 "zoned": false, 00:32:50.465 "supported_io_types": { 00:32:50.465 "read": true, 00:32:50.465 "write": true, 00:32:50.465 "unmap": false, 00:32:50.465 "flush": true, 00:32:50.465 "reset": true, 00:32:50.465 "nvme_admin": true, 00:32:50.465 "nvme_io": true, 00:32:50.465 "nvme_io_md": false, 00:32:50.465 "write_zeroes": true, 00:32:50.465 "zcopy": false, 00:32:50.465 "get_zone_info": false, 00:32:50.465 "zone_management": false, 00:32:50.465 "zone_append": false, 00:32:50.465 "compare": true, 00:32:50.465 "compare_and_write": true, 00:32:50.465 "abort": true, 00:32:50.465 "seek_hole": false, 00:32:50.465 "seek_data": false, 00:32:50.465 "copy": true, 00:32:50.465 "nvme_iov_md": false 00:32:50.465 }, 00:32:50.465 "memory_domains": [ 00:32:50.465 { 00:32:50.465 "dma_device_id": "system", 00:32:50.465 "dma_device_type": 1 00:32:50.465 } 00:32:50.465 ], 00:32:50.465 "driver_specific": { 00:32:50.465 "nvme": [ 00:32:50.465 { 00:32:50.465 "trid": { 00:32:50.465 "trtype": "TCP", 00:32:50.465 "adrfam": "IPv4", 00:32:50.465 "traddr": "10.0.0.2", 00:32:50.465 "trsvcid": "4420", 00:32:50.465 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:50.465 }, 00:32:50.465 "ctrlr_data": { 00:32:50.465 "cntlid": 1, 00:32:50.465 "vendor_id": "0x8086", 00:32:50.465 "model_number": "SPDK bdev Controller", 00:32:50.465 "serial_number": "00000000000000000000", 00:32:50.465 "firmware_revision": "25.01", 00:32:50.465 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:50.465 "oacs": { 00:32:50.465 "security": 0, 00:32:50.465 "format": 0, 00:32:50.465 "firmware": 0, 00:32:50.465 "ns_manage": 0 00:32:50.465 }, 00:32:50.465 "multi_ctrlr": true, 00:32:50.465 "ana_reporting": false 00:32:50.465 }, 00:32:50.465 "vs": { 00:32:50.465 "nvme_version": "1.3" 00:32:50.465 }, 00:32:50.465 "ns_data": { 00:32:50.465 "id": 1, 00:32:50.465 "can_share": true 00:32:50.465 } 
00:32:50.465 } 00:32:50.465 ], 00:32:50.465 "mp_policy": "active_passive" 00:32:50.465 } 00:32:50.465 } 00:32:50.465 ] 00:32:50.465 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.465 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:32:50.465 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.465 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:50.728 [2024-11-28 13:03:20.592166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:50.728 [2024-11-28 13:03:20.592250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ca0e0 (9): Bad file descriptor 00:32:50.728 [2024-11-28 13:03:20.724281] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:50.728 [ 00:32:50.728 { 00:32:50.728 "name": "nvme0n1", 00:32:50.728 "aliases": [ 00:32:50.728 "191a7e08-4cd2-4a7b-a24d-3941c48f3ad6" 00:32:50.728 ], 00:32:50.728 "product_name": "NVMe disk", 00:32:50.728 "block_size": 512, 00:32:50.728 "num_blocks": 2097152, 00:32:50.728 "uuid": "191a7e08-4cd2-4a7b-a24d-3941c48f3ad6", 00:32:50.728 "numa_id": 0, 00:32:50.728 "assigned_rate_limits": { 00:32:50.728 "rw_ios_per_sec": 0, 00:32:50.728 "rw_mbytes_per_sec": 0, 00:32:50.728 "r_mbytes_per_sec": 0, 00:32:50.728 "w_mbytes_per_sec": 0 00:32:50.728 }, 00:32:50.728 "claimed": false, 00:32:50.728 "zoned": false, 00:32:50.728 "supported_io_types": { 00:32:50.728 "read": true, 00:32:50.728 "write": true, 00:32:50.728 "unmap": false, 00:32:50.728 "flush": true, 00:32:50.728 "reset": true, 00:32:50.728 "nvme_admin": true, 00:32:50.728 "nvme_io": true, 00:32:50.728 "nvme_io_md": false, 00:32:50.728 "write_zeroes": true, 00:32:50.728 "zcopy": false, 00:32:50.728 "get_zone_info": false, 00:32:50.728 "zone_management": false, 00:32:50.728 "zone_append": false, 00:32:50.728 "compare": true, 00:32:50.728 "compare_and_write": true, 00:32:50.728 "abort": true, 00:32:50.728 "seek_hole": false, 00:32:50.728 "seek_data": false, 00:32:50.728 "copy": true, 00:32:50.728 "nvme_iov_md": false 00:32:50.728 }, 00:32:50.728 "memory_domains": [ 00:32:50.728 { 00:32:50.728 "dma_device_id": "system", 00:32:50.728 "dma_device_type": 1 00:32:50.728 } 00:32:50.728 ], 00:32:50.728 "driver_specific": { 00:32:50.728 "nvme": [ 00:32:50.728 { 00:32:50.728 "trid": { 
00:32:50.728 "trtype": "TCP", 00:32:50.728 "adrfam": "IPv4", 00:32:50.728 "traddr": "10.0.0.2", 00:32:50.728 "trsvcid": "4420", 00:32:50.728 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:50.728 }, 00:32:50.728 "ctrlr_data": { 00:32:50.728 "cntlid": 2, 00:32:50.728 "vendor_id": "0x8086", 00:32:50.728 "model_number": "SPDK bdev Controller", 00:32:50.728 "serial_number": "00000000000000000000", 00:32:50.728 "firmware_revision": "25.01", 00:32:50.728 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:50.728 "oacs": { 00:32:50.728 "security": 0, 00:32:50.728 "format": 0, 00:32:50.728 "firmware": 0, 00:32:50.728 "ns_manage": 0 00:32:50.728 }, 00:32:50.728 "multi_ctrlr": true, 00:32:50.728 "ana_reporting": false 00:32:50.728 }, 00:32:50.728 "vs": { 00:32:50.728 "nvme_version": "1.3" 00:32:50.728 }, 00:32:50.728 "ns_data": { 00:32:50.728 "id": 1, 00:32:50.728 "can_share": true 00:32:50.728 } 00:32:50.728 } 00:32:50.728 ], 00:32:50.728 "mp_policy": "active_passive" 00:32:50.728 } 00:32:50.728 } 00:32:50.728 ] 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.8MrEbisROs 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- 
host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.8MrEbisROs 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.8MrEbisROs 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:50.728 [2024-11-28 13:03:20.816356] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:50.728 [2024-11-28 13:03:20.816514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:32:50.728 13:03:20 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.728 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:50.729 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.729 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:32:50.729 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.729 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:50.729 [2024-11-28 13:03:20.840381] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:50.990 nvme0n1 00:32:50.990 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.990 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:32:50.990 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.990 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:50.990 [ 00:32:50.990 { 00:32:50.990 "name": "nvme0n1", 00:32:50.990 "aliases": [ 00:32:50.990 "191a7e08-4cd2-4a7b-a24d-3941c48f3ad6" 00:32:50.990 ], 00:32:50.990 "product_name": "NVMe disk", 00:32:50.990 "block_size": 512, 00:32:50.990 "num_blocks": 2097152, 00:32:50.990 "uuid": "191a7e08-4cd2-4a7b-a24d-3941c48f3ad6", 00:32:50.990 "numa_id": 0, 00:32:50.990 "assigned_rate_limits": { 00:32:50.990 "rw_ios_per_sec": 0, 00:32:50.990 "rw_mbytes_per_sec": 0, 00:32:50.990 "r_mbytes_per_sec": 0, 00:32:50.990 "w_mbytes_per_sec": 0 00:32:50.990 }, 00:32:50.990 "claimed": false, 00:32:50.990 "zoned": false, 00:32:50.990 "supported_io_types": { 
00:32:50.990 "read": true, 00:32:50.990 "write": true, 00:32:50.990 "unmap": false, 00:32:50.990 "flush": true, 00:32:50.990 "reset": true, 00:32:50.990 "nvme_admin": true, 00:32:50.990 "nvme_io": true, 00:32:50.990 "nvme_io_md": false, 00:32:50.990 "write_zeroes": true, 00:32:50.990 "zcopy": false, 00:32:50.990 "get_zone_info": false, 00:32:50.990 "zone_management": false, 00:32:50.990 "zone_append": false, 00:32:50.990 "compare": true, 00:32:50.990 "compare_and_write": true, 00:32:50.990 "abort": true, 00:32:50.990 "seek_hole": false, 00:32:50.990 "seek_data": false, 00:32:50.990 "copy": true, 00:32:50.990 "nvme_iov_md": false 00:32:50.990 }, 00:32:50.990 "memory_domains": [ 00:32:50.990 { 00:32:50.990 "dma_device_id": "system", 00:32:50.990 "dma_device_type": 1 00:32:50.990 } 00:32:50.990 ], 00:32:50.990 "driver_specific": { 00:32:50.990 "nvme": [ 00:32:50.990 { 00:32:50.990 "trid": { 00:32:50.990 "trtype": "TCP", 00:32:50.990 "adrfam": "IPv4", 00:32:50.990 "traddr": "10.0.0.2", 00:32:50.990 "trsvcid": "4421", 00:32:50.990 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:50.990 }, 00:32:50.990 "ctrlr_data": { 00:32:50.990 "cntlid": 3, 00:32:50.990 "vendor_id": "0x8086", 00:32:50.990 "model_number": "SPDK bdev Controller", 00:32:50.990 "serial_number": "00000000000000000000", 00:32:50.990 "firmware_revision": "25.01", 00:32:50.990 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:50.990 "oacs": { 00:32:50.990 "security": 0, 00:32:50.990 "format": 0, 00:32:50.990 "firmware": 0, 00:32:50.990 "ns_manage": 0 00:32:50.990 }, 00:32:50.990 "multi_ctrlr": true, 00:32:50.990 "ana_reporting": false 00:32:50.990 }, 00:32:50.990 "vs": { 00:32:50.990 "nvme_version": "1.3" 00:32:50.990 }, 00:32:50.990 "ns_data": { 00:32:50.990 "id": 1, 00:32:50.990 "can_share": true 00:32:50.990 } 00:32:50.990 } 00:32:50.990 ], 00:32:50.990 "mp_policy": "active_passive" 00:32:50.990 } 00:32:50.990 } 00:32:50.990 ] 00:32:50.990 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.990 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.990 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.990 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:50.990 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.990 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.8MrEbisROs 00:32:50.990 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:32:50.990 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:32:50.990 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:50.990 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:32:50.990 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:50.990 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:32:50.990 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:50.990 13:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:50.990 rmmod nvme_tcp 00:32:50.990 rmmod nvme_fabrics 00:32:50.990 rmmod nvme_keyring 00:32:50.990 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:50.990 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:32:50.990 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:32:50.990 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3564984 ']' 00:32:50.990 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 
3564984 00:32:50.990 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3564984 ']' 00:32:50.990 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3564984 00:32:50.990 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:32:50.990 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:50.990 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3564984 00:32:51.252 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:51.252 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:51.252 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3564984' 00:32:51.252 killing process with pid 3564984 00:32:51.252 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3564984 00:32:51.252 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3564984 00:32:51.252 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:51.252 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:51.252 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:51.252 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:32:51.252 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:32:51.252 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:51.252 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:32:51.252 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:51.252 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:51.252 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.252 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:51.252 13:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:53.799 00:32:53.799 real 0m11.819s 00:32:53.799 user 0m4.137s 00:32:53.799 sys 0m6.182s 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:53.799 ************************************ 00:32:53.799 END TEST nvmf_async_init 00:32:53.799 ************************************ 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.799 ************************************ 00:32:53.799 START TEST dma 00:32:53.799 ************************************ 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:32:53.799 * Looking for test storage... 
00:32:53.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:53.799 13:03:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:53.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.799 --rc genhtml_branch_coverage=1 00:32:53.799 --rc genhtml_function_coverage=1 00:32:53.799 --rc genhtml_legend=1 00:32:53.800 --rc geninfo_all_blocks=1 00:32:53.800 --rc geninfo_unexecuted_blocks=1 00:32:53.800 00:32:53.800 ' 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:53.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.800 --rc genhtml_branch_coverage=1 00:32:53.800 --rc genhtml_function_coverage=1 
00:32:53.800 --rc genhtml_legend=1 00:32:53.800 --rc geninfo_all_blocks=1 00:32:53.800 --rc geninfo_unexecuted_blocks=1 00:32:53.800 00:32:53.800 ' 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:53.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.800 --rc genhtml_branch_coverage=1 00:32:53.800 --rc genhtml_function_coverage=1 00:32:53.800 --rc genhtml_legend=1 00:32:53.800 --rc geninfo_all_blocks=1 00:32:53.800 --rc geninfo_unexecuted_blocks=1 00:32:53.800 00:32:53.800 ' 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:53.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.800 --rc genhtml_branch_coverage=1 00:32:53.800 --rc genhtml_function_coverage=1 00:32:53.800 --rc genhtml_legend=1 00:32:53.800 --rc geninfo_all_blocks=1 00:32:53.800 --rc geninfo_unexecuted_blocks=1 00:32:53.800 00:32:53.800 ' 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:32:53.800 
13:03:23 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:53.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:32:53.800 00:32:53.800 real 0m0.240s 00:32:53.800 user 0m0.147s 00:32:53.800 sys 0m0.108s 00:32:53.800 13:03:23 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:32:53.800 ************************************ 00:32:53.800 END TEST dma 00:32:53.800 ************************************ 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.800 ************************************ 00:32:53.800 START TEST nvmf_identify 00:32:53.800 ************************************ 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:32:53.800 * Looking for test storage... 
00:32:53.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:32:53.800 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:54.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.061 --rc genhtml_branch_coverage=1 00:32:54.061 --rc genhtml_function_coverage=1 00:32:54.061 --rc genhtml_legend=1 00:32:54.061 --rc geninfo_all_blocks=1 00:32:54.061 --rc geninfo_unexecuted_blocks=1 00:32:54.061 00:32:54.061 ' 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:32:54.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.061 --rc genhtml_branch_coverage=1 00:32:54.061 --rc genhtml_function_coverage=1 00:32:54.061 --rc genhtml_legend=1 00:32:54.061 --rc geninfo_all_blocks=1 00:32:54.061 --rc geninfo_unexecuted_blocks=1 00:32:54.061 00:32:54.061 ' 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:54.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.061 --rc genhtml_branch_coverage=1 00:32:54.061 --rc genhtml_function_coverage=1 00:32:54.061 --rc genhtml_legend=1 00:32:54.061 --rc geninfo_all_blocks=1 00:32:54.061 --rc geninfo_unexecuted_blocks=1 00:32:54.061 00:32:54.061 ' 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:54.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.061 --rc genhtml_branch_coverage=1 00:32:54.061 --rc genhtml_function_coverage=1 00:32:54.061 --rc genhtml_legend=1 00:32:54.061 --rc geninfo_all_blocks=1 00:32:54.061 --rc geninfo_unexecuted_blocks=1 00:32:54.061 00:32:54.061 ' 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:54.061 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:54.062 13:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:54.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:32:54.062 13:03:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:02.204 13:03:31 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:02.204 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:02.204 
13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:02.204 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:02.204 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:02.204 13:03:31 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:02.204 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
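[annotation] The discovery loop above (nvmf/common.sh@410–@428) maps each candidate NIC's PCI address to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/` and stripping the path, which is how `cvl_0_0` and `cvl_0_1` are found. A minimal sketch of that lookup, not SPDK's own code — the sysfs root is parameterized here so it can be exercised against a fake directory tree:

```shell
# Resolve a PCI address to its net interface name(s) by globbing sysfs,
# mirroring pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) plus the
# "${pci_net_devs[@]##*/}" basename strip seen in the trace above.
pci_net_devs() {
  local sysfs=$1 pci=$2 d
  for d in "$sysfs/devices/$pci/net/"*; do
    if [ -e "$d" ]; then       # glob may not match; skip the literal pattern
      echo "${d##*/}"          # keep only the interface name, e.g. cvl_0_0
    fi
  done
}
```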
00:33:02.204 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:02.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:02.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:33:02.205 00:33:02.205 --- 10.0.0.2 ping statistics --- 00:33:02.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:02.205 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:02.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:02.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:33:02.205 00:33:02.205 --- 10.0.0.1 ping statistics --- 00:33:02.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:02.205 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3569631 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3569631 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3569631 ']' 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:02.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
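[annotation] `waitforlisten` above blocks until the freshly launched `nvmf_tgt` is answering on `/var/tmp/spdk.sock`. A simplified stand-in for that wait, under stated assumptions: the real helper also probes the RPC socket itself, and the function name and retry policy here are our own.

```shell
# Poll until a path appears, up to $retries * 0.1s. The real waitforlisten
# checks for the UNIX-domain socket (test -S) and then issues an RPC probe;
# -e is used here only so the sketch is testable with a plain file.
wait_for_sock() {
  local sock=$1 retries=${2:-50}
  while [ "$retries" -gt 0 ]; do
    if [ -e "$sock" ]; then    # use -S for a real UNIX-domain socket
      return 0
    fi
    sleep 0.1
    retries=$(( retries - 1 ))
  done
  return 1
}
```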
00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:02.205 13:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:02.205 [2024-11-28 13:03:31.697129] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:33:02.205 [2024-11-28 13:03:31.697202] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:02.205 [2024-11-28 13:03:31.843137] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:02.205 [2024-11-28 13:03:31.900696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:02.205 [2024-11-28 13:03:31.930063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:02.205 [2024-11-28 13:03:31.930109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:02.205 [2024-11-28 13:03:31.930118] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:02.205 [2024-11-28 13:03:31.930125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:02.205 [2024-11-28 13:03:31.930131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
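[annotation] The target was started with `-m 0xF`, a CPU core mask, and the four "Reactor started on core N" notices above match its four set bits. A popcount sketch of that mask-to-core-count relationship (our own helper, not part of SPDK):

```shell
# Count set bits in a core mask such as the -m 0xF passed to nvmf_tgt;
# 0xF has four bits set, matching the four reactors in the log above.
core_count() {
  local mask=$(( $1 )) n=0    # $(( 0xF )) normalizes hex input to decimal
  while [ "$mask" -ne 0 ]; do
    n=$(( n + (mask & 1) ))   # add the lowest bit
    mask=$(( mask >> 1 ))     # shift it out
  done
  echo "$n"
}
```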
00:33:02.205 [2024-11-28 13:03:31.932087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:02.205 [2024-11-28 13:03:31.932244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:02.205 [2024-11-28 13:03:31.932297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:02.205 [2024-11-28 13:03:31.932297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:02.467 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:02.467 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:33:02.467 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:02.467 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.467 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:02.467 [2024-11-28 13:03:32.536369] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:02.467 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.467 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:33:02.467 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:02.467 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:02.467 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:02.467 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.467 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:02.728 Malloc0 00:33:02.728 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.728 13:03:32 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:02.728 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.728 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:02.728 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.728 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:33:02.728 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.728 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:02.728 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.728 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:02.728 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.728 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:02.728 [2024-11-28 13:03:32.653795] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:02.728 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.728 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:02.728 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.728 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:02.728 13:03:32 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.728 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:33:02.728 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.728 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:02.728 [ 00:33:02.728 { 00:33:02.728 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:02.728 "subtype": "Discovery", 00:33:02.728 "listen_addresses": [ 00:33:02.728 { 00:33:02.728 "trtype": "TCP", 00:33:02.728 "adrfam": "IPv4", 00:33:02.728 "traddr": "10.0.0.2", 00:33:02.728 "trsvcid": "4420" 00:33:02.728 } 00:33:02.728 ], 00:33:02.728 "allow_any_host": true, 00:33:02.728 "hosts": [] 00:33:02.728 }, 00:33:02.728 { 00:33:02.728 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:02.728 "subtype": "NVMe", 00:33:02.728 "listen_addresses": [ 00:33:02.728 { 00:33:02.728 "trtype": "TCP", 00:33:02.728 "adrfam": "IPv4", 00:33:02.728 "traddr": "10.0.0.2", 00:33:02.728 "trsvcid": "4420" 00:33:02.728 } 00:33:02.728 ], 00:33:02.728 "allow_any_host": true, 00:33:02.728 "hosts": [], 00:33:02.728 "serial_number": "SPDK00000000000001", 00:33:02.728 "model_number": "SPDK bdev Controller", 00:33:02.728 "max_namespaces": 32, 00:33:02.728 "min_cntlid": 1, 00:33:02.728 "max_cntlid": 65519, 00:33:02.728 "namespaces": [ 00:33:02.728 { 00:33:02.728 "nsid": 1, 00:33:02.728 "bdev_name": "Malloc0", 00:33:02.728 "name": "Malloc0", 00:33:02.729 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:33:02.729 "eui64": "ABCDEF0123456789", 00:33:02.729 "uuid": "e394d528-4f23-4693-b2e4-0a938d3ffd7e" 00:33:02.729 } 00:33:02.729 ] 00:33:02.729 } 00:33:02.729 ] 00:33:02.729 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.729 13:03:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:33:02.729 [2024-11-28 13:03:32.720296] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:33:02.729 [2024-11-28 13:03:32.720339] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3569748 ] 00:33:02.729 [2024-11-28 13:03:32.836961] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:02.993 [2024-11-28 13:03:32.878987] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:33:02.993 [2024-11-28 13:03:32.879056] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:33:02.993 [2024-11-28 13:03:32.879061] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:33:02.993 [2024-11-28 13:03:32.879080] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:33:02.993 [2024-11-28 13:03:32.879091] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:33:02.993 [2024-11-28 13:03:32.883670] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:33:02.993 [2024-11-28 13:03:32.883725] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2154d10 0 00:33:02.993 [2024-11-28 13:03:32.891177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:33:02.993 [2024-11-28 13:03:32.891198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =1 00:33:02.993 [2024-11-28 13:03:32.891203] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:33:02.993 [2024-11-28 13:03:32.891206] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:33:02.993 [2024-11-28 13:03:32.891251] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.993 [2024-11-28 13:03:32.891258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.993 [2024-11-28 13:03:32.891263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2154d10) 00:33:02.993 [2024-11-28 13:03:32.891281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:33:02.993 [2024-11-28 13:03:32.891305] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0700, cid 0, qid 0 00:33:02.993 [2024-11-28 13:03:32.899172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.993 [2024-11-28 13:03:32.899184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.993 [2024-11-28 13:03:32.899188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.993 [2024-11-28 13:03:32.899194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0700) on tqpair=0x2154d10 00:33:02.993 [2024-11-28 13:03:32.899207] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:33:02.993 [2024-11-28 13:03:32.899215] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:33:02.993 [2024-11-28 13:03:32.899221] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:33:02.993 [2024-11-28 13:03:32.899239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.993 [2024-11-28 13:03:32.899244] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.993 [2024-11-28 13:03:32.899247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2154d10) 00:33:02.993 [2024-11-28 13:03:32.899256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.993 [2024-11-28 13:03:32.899273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0700, cid 0, qid 0 00:33:02.993 [2024-11-28 13:03:32.899499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.993 [2024-11-28 13:03:32.899506] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.993 [2024-11-28 13:03:32.899510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.993 [2024-11-28 13:03:32.899514] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0700) on tqpair=0x2154d10 00:33:02.993 [2024-11-28 13:03:32.899526] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:33:02.993 [2024-11-28 13:03:32.899534] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:33:02.993 [2024-11-28 13:03:32.899541] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.993 [2024-11-28 13:03:32.899545] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.993 [2024-11-28 13:03:32.899548] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2154d10) 00:33:02.993 [2024-11-28 13:03:32.899555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.993 [2024-11-28 13:03:32.899566] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0700, cid 0, qid 0 00:33:02.993 [2024-11-28 
13:03:32.899792] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.993 [2024-11-28 13:03:32.899799] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.993 [2024-11-28 13:03:32.899803] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.993 [2024-11-28 13:03:32.899807] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0700) on tqpair=0x2154d10 00:33:02.993 [2024-11-28 13:03:32.899812] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:33:02.993 [2024-11-28 13:03:32.899821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:33:02.993 [2024-11-28 13:03:32.899828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.993 [2024-11-28 13:03:32.899832] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.993 [2024-11-28 13:03:32.899835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2154d10) 00:33:02.993 [2024-11-28 13:03:32.899842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.993 [2024-11-28 13:03:32.899853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0700, cid 0, qid 0 00:33:02.993 [2024-11-28 13:03:32.900044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.993 [2024-11-28 13:03:32.900051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.993 [2024-11-28 13:03:32.900054] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.993 [2024-11-28 13:03:32.900058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0700) on tqpair=0x2154d10 00:33:02.993 [2024-11-28 13:03:32.900064] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:33:02.993 [2024-11-28 13:03:32.900074] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.993 [2024-11-28 13:03:32.900078] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.993 [2024-11-28 13:03:32.900081] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2154d10) 00:33:02.993 [2024-11-28 13:03:32.900088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.993 [2024-11-28 13:03:32.900098] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0700, cid 0, qid 0 00:33:02.993 [2024-11-28 13:03:32.900315] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.993 [2024-11-28 13:03:32.900322] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.993 [2024-11-28 13:03:32.900326] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.993 [2024-11-28 13:03:32.900330] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0700) on tqpair=0x2154d10 00:33:02.993 [2024-11-28 13:03:32.900335] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:33:02.993 [2024-11-28 13:03:32.900340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:33:02.993 [2024-11-28 13:03:32.900348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:33:02.993 [2024-11-28 13:03:32.900454] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:33:02.994 [2024-11-28 
13:03:32.900458] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:33:02.994 [2024-11-28 13:03:32.900469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.900472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.900476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2154d10) 00:33:02.994 [2024-11-28 13:03:32.900483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.994 [2024-11-28 13:03:32.900493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0700, cid 0, qid 0 00:33:02.994 [2024-11-28 13:03:32.900710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.994 [2024-11-28 13:03:32.900718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.994 [2024-11-28 13:03:32.900721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.900725] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0700) on tqpair=0x2154d10 00:33:02.994 [2024-11-28 13:03:32.900731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:33:02.994 [2024-11-28 13:03:32.900740] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.900744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.900748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2154d10) 00:33:02.994 [2024-11-28 13:03:32.900755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:02.994 [2024-11-28 13:03:32.900769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0700, cid 0, qid 0 00:33:02.994 [2024-11-28 13:03:32.900963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.994 [2024-11-28 13:03:32.900970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.994 [2024-11-28 13:03:32.900973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.900977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0700) on tqpair=0x2154d10 00:33:02.994 [2024-11-28 13:03:32.900982] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:33:02.994 [2024-11-28 13:03:32.900987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:33:02.994 [2024-11-28 13:03:32.900995] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:33:02.994 [2024-11-28 13:03:32.901011] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:33:02.994 [2024-11-28 13:03:32.901021] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.901025] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2154d10) 00:33:02.994 [2024-11-28 13:03:32.901032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.994 [2024-11-28 13:03:32.901045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0700, cid 0, qid 0 00:33:02.994 [2024-11-28 
13:03:32.901296] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:02.994 [2024-11-28 13:03:32.901305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:02.994 [2024-11-28 13:03:32.901310] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.901315] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2154d10): datao=0, datal=4096, cccid=0 00:33:02.994 [2024-11-28 13:03:32.901320] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21d0700) on tqpair(0x2154d10): expected_datao=0, payload_size=4096 00:33:02.994 [2024-11-28 13:03:32.901324] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.901356] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.901361] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.947170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.994 [2024-11-28 13:03:32.947183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.994 [2024-11-28 13:03:32.947186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.947190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0700) on tqpair=0x2154d10 00:33:02.994 [2024-11-28 13:03:32.947201] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:33:02.994 [2024-11-28 13:03:32.947206] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:33:02.994 [2024-11-28 13:03:32.947211] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:33:02.994 [2024-11-28 13:03:32.947217] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:33:02.994 [2024-11-28 13:03:32.947222] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:33:02.994 [2024-11-28 13:03:32.947227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:33:02.994 [2024-11-28 13:03:32.947237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:33:02.994 [2024-11-28 13:03:32.947249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.947253] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.947257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2154d10) 00:33:02.994 [2024-11-28 13:03:32.947266] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:02.994 [2024-11-28 13:03:32.947280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0700, cid 0, qid 0 00:33:02.994 [2024-11-28 13:03:32.947500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.994 [2024-11-28 13:03:32.947506] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.994 [2024-11-28 13:03:32.947510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.947514] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0700) on tqpair=0x2154d10 00:33:02.994 [2024-11-28 13:03:32.947523] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.947526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.947530] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2154d10) 00:33:02.994 [2024-11-28 13:03:32.947536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:02.994 [2024-11-28 13:03:32.947543] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.947546] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.947550] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2154d10) 00:33:02.994 [2024-11-28 13:03:32.947556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:02.994 [2024-11-28 13:03:32.947562] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.947566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.947569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2154d10) 00:33:02.994 [2024-11-28 13:03:32.947575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:02.994 [2024-11-28 13:03:32.947581] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.947585] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.947589] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2154d10) 00:33:02.994 [2024-11-28 13:03:32.947594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:02.994 [2024-11-28 13:03:32.947599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to 
set keep alive timeout (timeout 30000 ms) 00:33:02.994 [2024-11-28 13:03:32.947612] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:33:02.994 [2024-11-28 13:03:32.947619] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.947623] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2154d10) 00:33:02.994 [2024-11-28 13:03:32.947630] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.994 [2024-11-28 13:03:32.947642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0700, cid 0, qid 0 00:33:02.994 [2024-11-28 13:03:32.947648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0880, cid 1, qid 0 00:33:02.994 [2024-11-28 13:03:32.947653] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0a00, cid 2, qid 0 00:33:02.994 [2024-11-28 13:03:32.947661] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0b80, cid 3, qid 0 00:33:02.994 [2024-11-28 13:03:32.947666] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0d00, cid 4, qid 0 00:33:02.994 [2024-11-28 13:03:32.947908] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.994 [2024-11-28 13:03:32.947915] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.994 [2024-11-28 13:03:32.947918] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.947922] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0d00) on tqpair=0x2154d10 00:33:02.994 [2024-11-28 13:03:32.947928] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 
00:33:02.994 [2024-11-28 13:03:32.947933] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:33:02.994 [2024-11-28 13:03:32.947945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.947949] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2154d10) 00:33:02.994 [2024-11-28 13:03:32.947956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.994 [2024-11-28 13:03:32.947966] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0d00, cid 4, qid 0 00:33:02.994 [2024-11-28 13:03:32.948240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:02.994 [2024-11-28 13:03:32.948247] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:02.994 [2024-11-28 13:03:32.948251] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:02.994 [2024-11-28 13:03:32.948255] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2154d10): datao=0, datal=4096, cccid=4 00:33:02.994 [2024-11-28 13:03:32.948259] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21d0d00) on tqpair(0x2154d10): expected_datao=0, payload_size=4096 00:33:02.994 [2024-11-28 13:03:32.948264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:32.948271] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:32.948275] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:32.948403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.995 [2024-11-28 13:03:32.948409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.995 [2024-11-28 13:03:32.948413] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:32.948417] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0d00) on tqpair=0x2154d10 00:33:02.995 [2024-11-28 13:03:32.948431] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:33:02.995 [2024-11-28 13:03:32.948459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:32.948463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2154d10) 00:33:02.995 [2024-11-28 13:03:32.948470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.995 [2024-11-28 13:03:32.948478] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:32.948482] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:32.948485] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2154d10) 00:33:02.995 [2024-11-28 13:03:32.948492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:33:02.995 [2024-11-28 13:03:32.948512] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0d00, cid 4, qid 0 00:33:02.995 [2024-11-28 13:03:32.948517] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0e80, cid 5, qid 0 00:33:02.995 [2024-11-28 13:03:32.948787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:02.995 [2024-11-28 13:03:32.948793] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:02.995 [2024-11-28 13:03:32.948797] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:32.948801] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2154d10): datao=0, datal=1024, cccid=4 00:33:02.995 [2024-11-28 13:03:32.948805] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21d0d00) on tqpair(0x2154d10): expected_datao=0, payload_size=1024 00:33:02.995 [2024-11-28 13:03:32.948810] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:32.948816] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:32.948820] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:32.948826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.995 [2024-11-28 13:03:32.948832] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.995 [2024-11-28 13:03:32.948835] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:32.948839] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0e80) on tqpair=0x2154d10 00:33:02.995 [2024-11-28 13:03:32.989355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.995 [2024-11-28 13:03:32.989369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.995 [2024-11-28 13:03:32.989372] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:32.989376] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0d00) on tqpair=0x2154d10 00:33:02.995 [2024-11-28 13:03:32.989393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:32.989397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2154d10) 00:33:02.995 [2024-11-28 13:03:32.989405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.995 
[2024-11-28 13:03:32.989421] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0d00, cid 4, qid 0 00:33:02.995 [2024-11-28 13:03:32.989609] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:02.995 [2024-11-28 13:03:32.989616] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:02.995 [2024-11-28 13:03:32.989620] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:32.989624] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2154d10): datao=0, datal=3072, cccid=4 00:33:02.995 [2024-11-28 13:03:32.989628] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21d0d00) on tqpair(0x2154d10): expected_datao=0, payload_size=3072 00:33:02.995 [2024-11-28 13:03:32.989632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:32.989640] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:32.989643] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:32.989857] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.995 [2024-11-28 13:03:32.989864] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.995 [2024-11-28 13:03:32.989868] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:32.989871] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0d00) on tqpair=0x2154d10 00:33:02.995 [2024-11-28 13:03:32.989880] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:32.989884] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2154d10) 00:33:02.995 [2024-11-28 13:03:32.989890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:02.995 [2024-11-28 13:03:32.989904] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0d00, cid 4, qid 0 00:33:02.995 [2024-11-28 13:03:32.990110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:02.995 [2024-11-28 13:03:32.990120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:02.995 [2024-11-28 13:03:32.990123] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:32.990127] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2154d10): datao=0, datal=8, cccid=4 00:33:02.995 [2024-11-28 13:03:32.990131] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21d0d00) on tqpair(0x2154d10): expected_datao=0, payload_size=8 00:33:02.995 [2024-11-28 13:03:32.990136] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:32.990142] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:32.990147] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:33.035171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.995 [2024-11-28 13:03:33.035182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.995 [2024-11-28 13:03:33.035186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.995 [2024-11-28 13:03:33.035190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0d00) on tqpair=0x2154d10 00:33:02.995 ===================================================== 00:33:02.995 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:02.995 ===================================================== 00:33:02.995 Controller Capabilities/Features 00:33:02.995 ================================ 00:33:02.995 Vendor ID: 0000 00:33:02.995 Subsystem Vendor ID: 0000 00:33:02.995 Serial Number: .................... 
00:33:02.995 Model Number: ........................................ 00:33:02.995 Firmware Version: 25.01 00:33:02.995 Recommended Arb Burst: 0 00:33:02.995 IEEE OUI Identifier: 00 00 00 00:33:02.995 Multi-path I/O 00:33:02.995 May have multiple subsystem ports: No 00:33:02.995 May have multiple controllers: No 00:33:02.995 Associated with SR-IOV VF: No 00:33:02.995 Max Data Transfer Size: 131072 00:33:02.995 Max Number of Namespaces: 0 00:33:02.995 Max Number of I/O Queues: 1024 00:33:02.995 NVMe Specification Version (VS): 1.3 00:33:02.995 NVMe Specification Version (Identify): 1.3 00:33:02.995 Maximum Queue Entries: 128 00:33:02.995 Contiguous Queues Required: Yes 00:33:02.995 Arbitration Mechanisms Supported 00:33:02.995 Weighted Round Robin: Not Supported 00:33:02.995 Vendor Specific: Not Supported 00:33:02.995 Reset Timeout: 15000 ms 00:33:02.995 Doorbell Stride: 4 bytes 00:33:02.995 NVM Subsystem Reset: Not Supported 00:33:02.995 Command Sets Supported 00:33:02.995 NVM Command Set: Supported 00:33:02.995 Boot Partition: Not Supported 00:33:02.995 Memory Page Size Minimum: 4096 bytes 00:33:02.995 Memory Page Size Maximum: 4096 bytes 00:33:02.995 Persistent Memory Region: Not Supported 00:33:02.995 Optional Asynchronous Events Supported 00:33:02.995 Namespace Attribute Notices: Not Supported 00:33:02.995 Firmware Activation Notices: Not Supported 00:33:02.995 ANA Change Notices: Not Supported 00:33:02.995 PLE Aggregate Log Change Notices: Not Supported 00:33:02.995 LBA Status Info Alert Notices: Not Supported 00:33:02.995 EGE Aggregate Log Change Notices: Not Supported 00:33:02.995 Normal NVM Subsystem Shutdown event: Not Supported 00:33:02.995 Zone Descriptor Change Notices: Not Supported 00:33:02.995 Discovery Log Change Notices: Supported 00:33:02.995 Controller Attributes 00:33:02.995 128-bit Host Identifier: Not Supported 00:33:02.995 Non-Operational Permissive Mode: Not Supported 00:33:02.995 NVM Sets: Not Supported 00:33:02.995 Read Recovery Levels: Not 
Supported 00:33:02.995 Endurance Groups: Not Supported 00:33:02.995 Predictable Latency Mode: Not Supported 00:33:02.995 Traffic Based Keep ALive: Not Supported 00:33:02.995 Namespace Granularity: Not Supported 00:33:02.995 SQ Associations: Not Supported 00:33:02.995 UUID List: Not Supported 00:33:02.995 Multi-Domain Subsystem: Not Supported 00:33:02.995 Fixed Capacity Management: Not Supported 00:33:02.995 Variable Capacity Management: Not Supported 00:33:02.995 Delete Endurance Group: Not Supported 00:33:02.995 Delete NVM Set: Not Supported 00:33:02.995 Extended LBA Formats Supported: Not Supported 00:33:02.995 Flexible Data Placement Supported: Not Supported 00:33:02.995 00:33:02.995 Controller Memory Buffer Support 00:33:02.995 ================================ 00:33:02.995 Supported: No 00:33:02.995 00:33:02.995 Persistent Memory Region Support 00:33:02.995 ================================ 00:33:02.995 Supported: No 00:33:02.995 00:33:02.995 Admin Command Set Attributes 00:33:02.995 ============================ 00:33:02.995 Security Send/Receive: Not Supported 00:33:02.995 Format NVM: Not Supported 00:33:02.995 Firmware Activate/Download: Not Supported 00:33:02.995 Namespace Management: Not Supported 00:33:02.996 Device Self-Test: Not Supported 00:33:02.996 Directives: Not Supported 00:33:02.996 NVMe-MI: Not Supported 00:33:02.996 Virtualization Management: Not Supported 00:33:02.996 Doorbell Buffer Config: Not Supported 00:33:02.996 Get LBA Status Capability: Not Supported 00:33:02.996 Command & Feature Lockdown Capability: Not Supported 00:33:02.996 Abort Command Limit: 1 00:33:02.996 Async Event Request Limit: 4 00:33:02.996 Number of Firmware Slots: N/A 00:33:02.996 Firmware Slot 1 Read-Only: N/A 00:33:02.996 Firmware Activation Without Reset: N/A 00:33:02.996 Multiple Update Detection Support: N/A 00:33:02.996 Firmware Update Granularity: No Information Provided 00:33:02.996 Per-Namespace SMART Log: No 00:33:02.996 Asymmetric Namespace Access Log Page: Not 
Supported 00:33:02.996 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:02.996 Command Effects Log Page: Not Supported 00:33:02.996 Get Log Page Extended Data: Supported 00:33:02.996 Telemetry Log Pages: Not Supported 00:33:02.996 Persistent Event Log Pages: Not Supported 00:33:02.996 Supported Log Pages Log Page: May Support 00:33:02.996 Commands Supported & Effects Log Page: Not Supported 00:33:02.996 Feature Identifiers & Effects Log Page:May Support 00:33:02.996 NVMe-MI Commands & Effects Log Page: May Support 00:33:02.996 Data Area 4 for Telemetry Log: Not Supported 00:33:02.996 Error Log Page Entries Supported: 128 00:33:02.996 Keep Alive: Not Supported 00:33:02.996 00:33:02.996 NVM Command Set Attributes 00:33:02.996 ========================== 00:33:02.996 Submission Queue Entry Size 00:33:02.996 Max: 1 00:33:02.996 Min: 1 00:33:02.996 Completion Queue Entry Size 00:33:02.996 Max: 1 00:33:02.996 Min: 1 00:33:02.996 Number of Namespaces: 0 00:33:02.996 Compare Command: Not Supported 00:33:02.996 Write Uncorrectable Command: Not Supported 00:33:02.996 Dataset Management Command: Not Supported 00:33:02.996 Write Zeroes Command: Not Supported 00:33:02.996 Set Features Save Field: Not Supported 00:33:02.996 Reservations: Not Supported 00:33:02.996 Timestamp: Not Supported 00:33:02.996 Copy: Not Supported 00:33:02.996 Volatile Write Cache: Not Present 00:33:02.996 Atomic Write Unit (Normal): 1 00:33:02.996 Atomic Write Unit (PFail): 1 00:33:02.996 Atomic Compare & Write Unit: 1 00:33:02.996 Fused Compare & Write: Supported 00:33:02.996 Scatter-Gather List 00:33:02.996 SGL Command Set: Supported 00:33:02.996 SGL Keyed: Supported 00:33:02.996 SGL Bit Bucket Descriptor: Not Supported 00:33:02.996 SGL Metadata Pointer: Not Supported 00:33:02.996 Oversized SGL: Not Supported 00:33:02.996 SGL Metadata Address: Not Supported 00:33:02.996 SGL Offset: Supported 00:33:02.996 Transport SGL Data Block: Not Supported 00:33:02.996 Replay Protected Memory Block: Not 
Supported 00:33:02.996 00:33:02.996 Firmware Slot Information 00:33:02.996 ========================= 00:33:02.996 Active slot: 0 00:33:02.996 00:33:02.996 00:33:02.996 Error Log 00:33:02.996 ========= 00:33:02.996 00:33:02.996 Active Namespaces 00:33:02.996 ================= 00:33:02.996 Discovery Log Page 00:33:02.996 ================== 00:33:02.996 Generation Counter: 2 00:33:02.996 Number of Records: 2 00:33:02.996 Record Format: 0 00:33:02.996 00:33:02.996 Discovery Log Entry 0 00:33:02.996 ---------------------- 00:33:02.996 Transport Type: 3 (TCP) 00:33:02.996 Address Family: 1 (IPv4) 00:33:02.996 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:02.996 Entry Flags: 00:33:02.996 Duplicate Returned Information: 1 00:33:02.996 Explicit Persistent Connection Support for Discovery: 1 00:33:02.996 Transport Requirements: 00:33:02.996 Secure Channel: Not Required 00:33:02.996 Port ID: 0 (0x0000) 00:33:02.996 Controller ID: 65535 (0xffff) 00:33:02.996 Admin Max SQ Size: 128 00:33:02.996 Transport Service Identifier: 4420 00:33:02.996 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:02.996 Transport Address: 10.0.0.2 00:33:02.996 Discovery Log Entry 1 00:33:02.996 ---------------------- 00:33:02.996 Transport Type: 3 (TCP) 00:33:02.996 Address Family: 1 (IPv4) 00:33:02.996 Subsystem Type: 2 (NVM Subsystem) 00:33:02.996 Entry Flags: 00:33:02.996 Duplicate Returned Information: 0 00:33:02.996 Explicit Persistent Connection Support for Discovery: 0 00:33:02.996 Transport Requirements: 00:33:02.996 Secure Channel: Not Required 00:33:02.996 Port ID: 0 (0x0000) 00:33:02.996 Controller ID: 65535 (0xffff) 00:33:02.996 Admin Max SQ Size: 128 00:33:02.996 Transport Service Identifier: 4420 00:33:02.996 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:33:02.996 Transport Address: 10.0.0.2 [2024-11-28 13:03:33.035291] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 
00:33:02.996 [2024-11-28 13:03:33.035305] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0700) on tqpair=0x2154d10 00:33:02.996 [2024-11-28 13:03:33.035312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:02.996 [2024-11-28 13:03:33.035318] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0880) on tqpair=0x2154d10 00:33:02.996 [2024-11-28 13:03:33.035323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:02.996 [2024-11-28 13:03:33.035328] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0a00) on tqpair=0x2154d10 00:33:02.996 [2024-11-28 13:03:33.035333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:02.996 [2024-11-28 13:03:33.035338] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0b80) on tqpair=0x2154d10 00:33:02.996 [2024-11-28 13:03:33.035342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:02.996 [2024-11-28 13:03:33.035352] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.996 [2024-11-28 13:03:33.035356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.996 [2024-11-28 13:03:33.035360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2154d10) 00:33:02.996 [2024-11-28 13:03:33.035367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.996 [2024-11-28 13:03:33.035383] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0b80, cid 3, qid 0 00:33:02.996 [2024-11-28 13:03:33.035601] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:33:02.996 [2024-11-28 13:03:33.035608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.996 [2024-11-28 13:03:33.035611] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.996 [2024-11-28 13:03:33.035615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0b80) on tqpair=0x2154d10 00:33:02.996 [2024-11-28 13:03:33.035623] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.996 [2024-11-28 13:03:33.035627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.996 [2024-11-28 13:03:33.035630] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2154d10) 00:33:02.996 [2024-11-28 13:03:33.035637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.996 [2024-11-28 13:03:33.035651] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0b80, cid 3, qid 0 00:33:02.996 [2024-11-28 13:03:33.035902] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.996 [2024-11-28 13:03:33.035909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.996 [2024-11-28 13:03:33.035913] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.996 [2024-11-28 13:03:33.035917] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0b80) on tqpair=0x2154d10 00:33:02.996 [2024-11-28 13:03:33.035922] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:33:02.996 [2024-11-28 13:03:33.035927] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:33:02.996 [2024-11-28 13:03:33.035937] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.996 [2024-11-28 13:03:33.035941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:33:02.996 [2024-11-28 13:03:33.035945] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2154d10) 00:33:02.996 [2024-11-28 13:03:33.035951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.996 [2024-11-28 13:03:33.035962] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0b80, cid 3, qid 0 00:33:02.996 [2024-11-28 13:03:33.036206] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.996 [2024-11-28 13:03:33.036213] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.996 [2024-11-28 13:03:33.036216] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.996 [2024-11-28 13:03:33.036220] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0b80) on tqpair=0x2154d10 00:33:02.996 [2024-11-28 13:03:33.036231] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.996 [2024-11-28 13:03:33.036235] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.996 [2024-11-28 13:03:33.036239] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2154d10) 00:33:02.996 [2024-11-28 13:03:33.036245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.996 [2024-11-28 13:03:33.036256] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0b80, cid 3, qid 0 00:33:02.996 [2024-11-28 13:03:33.036466] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.996 [2024-11-28 13:03:33.036473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.997 [2024-11-28 13:03:33.036476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.036480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x21d0b80) on tqpair=0x2154d10 00:33:02.997 [2024-11-28 13:03:33.036490] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.036494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.036498] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2154d10) 00:33:02.997 [2024-11-28 13:03:33.036504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.997 [2024-11-28 13:03:33.036514] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0b80, cid 3, qid 0 00:33:02.997 [2024-11-28 13:03:33.036707] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.997 [2024-11-28 13:03:33.036713] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.997 [2024-11-28 13:03:33.036716] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.036720] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0b80) on tqpair=0x2154d10 00:33:02.997 [2024-11-28 13:03:33.036730] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.036734] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.036738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2154d10) 00:33:02.997 [2024-11-28 13:03:33.036745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.997 [2024-11-28 13:03:33.036757] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0b80, cid 3, qid 0 00:33:02.997 [2024-11-28 13:03:33.037010] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.997 [2024-11-28 13:03:33.037018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:33:02.997 [2024-11-28 13:03:33.037021] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.037025] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0b80) on tqpair=0x2154d10 00:33:02.997 [2024-11-28 13:03:33.037035] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.037039] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.037043] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2154d10) 00:33:02.997 [2024-11-28 13:03:33.037050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.997 [2024-11-28 13:03:33.037060] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0b80, cid 3, qid 0 00:33:02.997 [2024-11-28 13:03:33.037312] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.997 [2024-11-28 13:03:33.037320] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.997 [2024-11-28 13:03:33.037324] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.037328] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0b80) on tqpair=0x2154d10 00:33:02.997 [2024-11-28 13:03:33.037338] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.037342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.037345] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2154d10) 00:33:02.997 [2024-11-28 13:03:33.037352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.997 [2024-11-28 13:03:33.037363] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0b80, cid 3, qid 0 00:33:02.997 [2024-11-28 13:03:33.037580] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.997 [2024-11-28 13:03:33.037587] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.997 [2024-11-28 13:03:33.037590] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.037594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0b80) on tqpair=0x2154d10 00:33:02.997 [2024-11-28 13:03:33.037604] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.037608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.037612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2154d10) 00:33:02.997 [2024-11-28 13:03:33.037618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.997 [2024-11-28 13:03:33.037629] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0b80, cid 3, qid 0 00:33:02.997 [2024-11-28 13:03:33.037895] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.997 [2024-11-28 13:03:33.037901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.997 [2024-11-28 13:03:33.037905] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.037909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0b80) on tqpair=0x2154d10 00:33:02.997 [2024-11-28 13:03:33.037919] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.037923] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.037927] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x2154d10) 00:33:02.997 [2024-11-28 13:03:33.037933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.997 [2024-11-28 13:03:33.037947] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0b80, cid 3, qid 0 00:33:02.997 [2024-11-28 13:03:33.038172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.997 [2024-11-28 13:03:33.038178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.997 [2024-11-28 13:03:33.038182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.038186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0b80) on tqpair=0x2154d10 00:33:02.997 [2024-11-28 13:03:33.038196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.038200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.038203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2154d10) 00:33:02.997 [2024-11-28 13:03:33.038210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.997 [2024-11-28 13:03:33.038221] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0b80, cid 3, qid 0 00:33:02.997 [2024-11-28 13:03:33.038423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.997 [2024-11-28 13:03:33.038429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.997 [2024-11-28 13:03:33.038433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.038437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0b80) on tqpair=0x2154d10 00:33:02.997 [2024-11-28 13:03:33.038446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:33:02.997 [2024-11-28 13:03:33.038450] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.038454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2154d10) 00:33:02.997 [2024-11-28 13:03:33.038461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.997 [2024-11-28 13:03:33.038471] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0b80, cid 3, qid 0 00:33:02.997 [2024-11-28 13:03:33.038657] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.997 [2024-11-28 13:03:33.038664] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.997 [2024-11-28 13:03:33.038667] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.038671] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0b80) on tqpair=0x2154d10 00:33:02.997 [2024-11-28 13:03:33.038681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.038685] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.038688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2154d10) 00:33:02.997 [2024-11-28 13:03:33.038695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.997 [2024-11-28 13:03:33.038706] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0b80, cid 3, qid 0 00:33:02.997 [2024-11-28 13:03:33.038927] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.997 [2024-11-28 13:03:33.038933] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.997 [2024-11-28 13:03:33.038937] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:33:02.997 [2024-11-28 13:03:33.038941] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0b80) on tqpair=0x2154d10 00:33:02.997 [2024-11-28 13:03:33.038951] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.038955] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.997 [2024-11-28 13:03:33.038959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2154d10) 00:33:02.997 [2024-11-28 13:03:33.038966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.997 [2024-11-28 13:03:33.038976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0b80, cid 3, qid 0 00:33:02.998 [2024-11-28 13:03:33.043171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:02.998 [2024-11-28 13:03:33.043183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.998 [2024-11-28 13:03:33.043187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.998 [2024-11-28 13:03:33.043191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0b80) on tqpair=0x2154d10 00:33:02.998 [2024-11-28 13:03:33.043201] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:02.998 [2024-11-28 13:03:33.043205] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:02.998 [2024-11-28 13:03:33.043209] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2154d10) 00:33:02.998 [2024-11-28 13:03:33.043216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.998 [2024-11-28 13:03:33.043228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0b80, cid 3, qid 0 00:33:02.998 [2024-11-28 13:03:33.043458] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:33:02.998 [2024-11-28 13:03:33.043464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:02.998 [2024-11-28 13:03:33.043468] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:02.998 [2024-11-28 13:03:33.043472] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0b80) on tqpair=0x2154d10 00:33:02.998 [2024-11-28 13:03:33.043480] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:33:02.998 00:33:02.998 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:33:02.998 [2024-11-28 13:03:33.093284] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:33:02.998 [2024-11-28 13:03:33.093327] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3569804 ] 00:33:03.263 [2024-11-28 13:03:33.209776] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:33:03.263 [2024-11-28 13:03:33.251647] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:33:03.263 [2024-11-28 13:03:33.251700] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:33:03.263 [2024-11-28 13:03:33.251706] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:33:03.263 [2024-11-28 13:03:33.251729] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:33:03.263 [2024-11-28 13:03:33.251738] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:33:03.263 [2024-11-28 13:03:33.252516] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:33:03.263 [2024-11-28 13:03:33.252552] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f63d10 0 00:33:03.263 [2024-11-28 13:03:33.266176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:33:03.263 [2024-11-28 13:03:33.266191] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:33:03.263 [2024-11-28 13:03:33.266196] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:33:03.263 [2024-11-28 13:03:33.266199] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:33:03.263 [2024-11-28 13:03:33.266235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.263 [2024-11-28 13:03:33.266242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.263 [2024-11-28 13:03:33.266250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f63d10) 00:33:03.264 [2024-11-28 13:03:33.266264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:33:03.264 [2024-11-28 13:03:33.266286] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdf700, cid 0, qid 0 00:33:03.264 [2024-11-28 13:03:33.270173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.264 [2024-11-28 13:03:33.270184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.264 [2024-11-28 13:03:33.270188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.264 [2024-11-28 13:03:33.270192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdf700) on tqpair=0x1f63d10 00:33:03.264 [2024-11-28 13:03:33.270202] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:33:03.264 [2024-11-28 13:03:33.270210] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:33:03.264 [2024-11-28 13:03:33.270215] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:33:03.264 [2024-11-28 13:03:33.270232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.264 [2024-11-28 13:03:33.270236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.264 [2024-11-28 13:03:33.270240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f63d10) 00:33:03.264 [2024-11-28 13:03:33.270248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.264 [2024-11-28 13:03:33.270263] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdf700, cid 0, qid 0 00:33:03.264 [2024-11-28 13:03:33.270454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.264 [2024-11-28 13:03:33.270461] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.264 [2024-11-28 13:03:33.270464] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.264 [2024-11-28 
13:03:33.270468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdf700) on tqpair=0x1f63d10 00:33:03.264 [2024-11-28 13:03:33.270477] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:33:03.264 [2024-11-28 13:03:33.270484] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:33:03.264 [2024-11-28 13:03:33.270491] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.264 [2024-11-28 13:03:33.270495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.264 [2024-11-28 13:03:33.270499] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f63d10) 00:33:03.264 [2024-11-28 13:03:33.270506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.264 [2024-11-28 13:03:33.270518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdf700, cid 0, qid 0 00:33:03.264 [2024-11-28 13:03:33.270631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.264 [2024-11-28 13:03:33.270637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.264 [2024-11-28 13:03:33.270640] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.264 [2024-11-28 13:03:33.270644] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdf700) on tqpair=0x1f63d10 00:33:03.264 [2024-11-28 13:03:33.270650] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:33:03.264 [2024-11-28 13:03:33.270659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:33:03.264 [2024-11-28 13:03:33.270665] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:33:03.264 [2024-11-28 13:03:33.270669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.264 [2024-11-28 13:03:33.270673] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f63d10) 00:33:03.264 [2024-11-28 13:03:33.270683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.264 [2024-11-28 13:03:33.270694] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdf700, cid 0, qid 0 00:33:03.264 [2024-11-28 13:03:33.270901] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.264 [2024-11-28 13:03:33.270907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.264 [2024-11-28 13:03:33.270910] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.264 [2024-11-28 13:03:33.270914] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdf700) on tqpair=0x1f63d10 00:33:03.264 [2024-11-28 13:03:33.270919] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:33:03.264 [2024-11-28 13:03:33.270929] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.264 [2024-11-28 13:03:33.270933] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.264 [2024-11-28 13:03:33.270937] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f63d10) 00:33:03.264 [2024-11-28 13:03:33.270943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.264 [2024-11-28 13:03:33.270954] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdf700, cid 0, qid 0 00:33:03.264 [2024-11-28 13:03:33.271111] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.264 
[2024-11-28 13:03:33.271117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.264 [2024-11-28 13:03:33.271121] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.264 [2024-11-28 13:03:33.271125] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdf700) on tqpair=0x1f63d10 00:33:03.264 [2024-11-28 13:03:33.271130] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:33:03.264 [2024-11-28 13:03:33.271135] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:33:03.264 [2024-11-28 13:03:33.271143] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:33:03.264 [2024-11-28 13:03:33.271248] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:33:03.264 [2024-11-28 13:03:33.271254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:33:03.264 [2024-11-28 13:03:33.271262] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.264 [2024-11-28 13:03:33.271266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.264 [2024-11-28 13:03:33.271269] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f63d10) 00:33:03.264 [2024-11-28 13:03:33.271276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.264 [2024-11-28 13:03:33.271287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdf700, cid 0, qid 0 00:33:03.264 [2024-11-28 13:03:33.271560] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:33:03.264 [2024-11-28 13:03:33.271567] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.264 [2024-11-28 13:03:33.271570] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.264 [2024-11-28 13:03:33.271574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdf700) on tqpair=0x1f63d10 00:33:03.264 [2024-11-28 13:03:33.271579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:33:03.264 [2024-11-28 13:03:33.271589] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.264 [2024-11-28 13:03:33.271593] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.264 [2024-11-28 13:03:33.271601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f63d10) 00:33:03.264 [2024-11-28 13:03:33.271608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.264 [2024-11-28 13:03:33.271619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdf700, cid 0, qid 0 00:33:03.264 [2024-11-28 13:03:33.271752] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.264 [2024-11-28 13:03:33.271760] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.264 [2024-11-28 13:03:33.271764] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.264 [2024-11-28 13:03:33.271767] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdf700) on tqpair=0x1f63d10 00:33:03.265 [2024-11-28 13:03:33.271772] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:33:03.265 [2024-11-28 13:03:33.271777] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset 
admin queue (timeout 30000 ms) 00:33:03.265 [2024-11-28 13:03:33.271785] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:33:03.265 [2024-11-28 13:03:33.271794] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:33:03.265 [2024-11-28 13:03:33.271803] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.265 [2024-11-28 13:03:33.271807] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f63d10) 00:33:03.265 [2024-11-28 13:03:33.271814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.265 [2024-11-28 13:03:33.271824] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdf700, cid 0, qid 0 00:33:03.265 [2024-11-28 13:03:33.272068] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:03.265 [2024-11-28 13:03:33.272074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:03.265 [2024-11-28 13:03:33.272078] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:03.265 [2024-11-28 13:03:33.272082] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f63d10): datao=0, datal=4096, cccid=0 00:33:03.265 [2024-11-28 13:03:33.272087] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fdf700) on tqpair(0x1f63d10): expected_datao=0, payload_size=4096 00:33:03.265 [2024-11-28 13:03:33.272092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.265 [2024-11-28 13:03:33.272100] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:03.265 [2024-11-28 13:03:33.272104] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:03.265 [2024-11-28 13:03:33.272238] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.265 [2024-11-28 13:03:33.272245] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.265 [2024-11-28 13:03:33.272249] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.265 [2024-11-28 13:03:33.272252] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdf700) on tqpair=0x1f63d10 00:33:03.265 [2024-11-28 13:03:33.272261] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:33:03.265 [2024-11-28 13:03:33.272267] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:33:03.265 [2024-11-28 13:03:33.272271] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:33:03.265 [2024-11-28 13:03:33.272276] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:33:03.265 [2024-11-28 13:03:33.272281] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:33:03.265 [2024-11-28 13:03:33.272286] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:33:03.265 [2024-11-28 13:03:33.272298] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:33:03.265 [2024-11-28 13:03:33.272305] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.265 [2024-11-28 13:03:33.272309] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.265 [2024-11-28 13:03:33.272312] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f63d10) 00:33:03.265 [2024-11-28 13:03:33.272320] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES 
ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:03.265 [2024-11-28 13:03:33.272331] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdf700, cid 0, qid 0 00:33:03.265 [2024-11-28 13:03:33.272517] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.265 [2024-11-28 13:03:33.272523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.265 [2024-11-28 13:03:33.272527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.265 [2024-11-28 13:03:33.272531] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdf700) on tqpair=0x1f63d10 00:33:03.265 [2024-11-28 13:03:33.272538] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.265 [2024-11-28 13:03:33.272542] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.265 [2024-11-28 13:03:33.272546] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f63d10) 00:33:03.265 [2024-11-28 13:03:33.272552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.265 [2024-11-28 13:03:33.272559] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.265 [2024-11-28 13:03:33.272563] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.265 [2024-11-28 13:03:33.272566] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f63d10) 00:33:03.265 [2024-11-28 13:03:33.272572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.265 [2024-11-28 13:03:33.272578] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.265 [2024-11-28 13:03:33.272582] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.265 [2024-11-28 13:03:33.272586] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f63d10) 00:33:03.265 [2024-11-28 13:03:33.272592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.265 [2024-11-28 13:03:33.272598] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.265 [2024-11-28 13:03:33.272601] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.265 [2024-11-28 13:03:33.272605] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f63d10) 00:33:03.265 [2024-11-28 13:03:33.272611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.265 [2024-11-28 13:03:33.272616] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:33:03.265 [2024-11-28 13:03:33.272627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:33:03.265 [2024-11-28 13:03:33.272633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.265 [2024-11-28 13:03:33.272637] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f63d10) 00:33:03.265 [2024-11-28 13:03:33.272644] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.265 [2024-11-28 13:03:33.272656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdf700, cid 0, qid 0 00:33:03.265 [2024-11-28 13:03:33.272661] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdf880, cid 1, qid 0 00:33:03.265 [2024-11-28 13:03:33.272668] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfa00, cid 2, qid 0 
00:33:03.265 [2024-11-28 13:03:33.272673] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfb80, cid 3, qid 0 00:33:03.265 [2024-11-28 13:03:33.272678] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfd00, cid 4, qid 0 00:33:03.265 [2024-11-28 13:03:33.272834] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.265 [2024-11-28 13:03:33.272840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.265 [2024-11-28 13:03:33.272844] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.265 [2024-11-28 13:03:33.272848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfd00) on tqpair=0x1f63d10 00:33:03.265 [2024-11-28 13:03:33.272853] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:33:03.265 [2024-11-28 13:03:33.272858] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:33:03.265 [2024-11-28 13:03:33.272869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:33:03.265 [2024-11-28 13:03:33.272876] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:33:03.265 [2024-11-28 13:03:33.272882] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.265 [2024-11-28 13:03:33.272886] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.265 [2024-11-28 13:03:33.272890] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f63d10) 00:33:03.265 [2024-11-28 13:03:33.272897] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 
len:0x0 00:33:03.265 [2024-11-28 13:03:33.272907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfd00, cid 4, qid 0 00:33:03.265 [2024-11-28 13:03:33.273018] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.265 [2024-11-28 13:03:33.273024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.265 [2024-11-28 13:03:33.273028] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.266 [2024-11-28 13:03:33.273031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfd00) on tqpair=0x1f63d10 00:33:03.266 [2024-11-28 13:03:33.273100] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:33:03.266 [2024-11-28 13:03:33.273109] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:33:03.266 [2024-11-28 13:03:33.273117] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.266 [2024-11-28 13:03:33.273121] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f63d10) 00:33:03.266 [2024-11-28 13:03:33.273127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.266 [2024-11-28 13:03:33.273138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfd00, cid 4, qid 0 00:33:03.266 [2024-11-28 13:03:33.277170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:03.266 [2024-11-28 13:03:33.277179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:03.266 [2024-11-28 13:03:33.277183] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:03.266 [2024-11-28 13:03:33.277187] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x1f63d10): datao=0, datal=4096, cccid=4 00:33:03.266 [2024-11-28 13:03:33.277191] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fdfd00) on tqpair(0x1f63d10): expected_datao=0, payload_size=4096 00:33:03.266 [2024-11-28 13:03:33.277196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.266 [2024-11-28 13:03:33.277206] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:03.266 [2024-11-28 13:03:33.277209] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:03.266 [2024-11-28 13:03:33.277215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.266 [2024-11-28 13:03:33.277221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.266 [2024-11-28 13:03:33.277225] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.266 [2024-11-28 13:03:33.277228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfd00) on tqpair=0x1f63d10 00:33:03.266 [2024-11-28 13:03:33.277241] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:33:03.266 [2024-11-28 13:03:33.277251] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:33:03.266 [2024-11-28 13:03:33.277262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:33:03.266 [2024-11-28 13:03:33.277268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.266 [2024-11-28 13:03:33.277272] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f63d10) 00:33:03.266 [2024-11-28 13:03:33.277279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.266 [2024-11-28 13:03:33.277291] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfd00, cid 4, qid 0 00:33:03.266 [2024-11-28 13:03:33.277511] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:03.266 [2024-11-28 13:03:33.277518] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:03.266 [2024-11-28 13:03:33.277521] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:03.266 [2024-11-28 13:03:33.277525] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f63d10): datao=0, datal=4096, cccid=4 00:33:03.266 [2024-11-28 13:03:33.277529] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fdfd00) on tqpair(0x1f63d10): expected_datao=0, payload_size=4096 00:33:03.266 [2024-11-28 13:03:33.277534] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.266 [2024-11-28 13:03:33.277540] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:03.266 [2024-11-28 13:03:33.277544] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:03.266 [2024-11-28 13:03:33.277732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.266 [2024-11-28 13:03:33.277738] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.266 [2024-11-28 13:03:33.277742] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.266 [2024-11-28 13:03:33.277746] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfd00) on tqpair=0x1f63d10 00:33:03.266 [2024-11-28 13:03:33.277757] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:33:03.266 [2024-11-28 13:03:33.277767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:33:03.266 [2024-11-28 13:03:33.277774] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:33:03.266 [2024-11-28 13:03:33.277778] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f63d10) 00:33:03.266 [2024-11-28 13:03:33.277785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.266 [2024-11-28 13:03:33.277795] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfd00, cid 4, qid 0 00:33:03.266 [2024-11-28 13:03:33.278050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:03.266 [2024-11-28 13:03:33.278057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:03.266 [2024-11-28 13:03:33.278062] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:03.266 [2024-11-28 13:03:33.278066] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f63d10): datao=0, datal=4096, cccid=4 00:33:03.266 [2024-11-28 13:03:33.278073] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fdfd00) on tqpair(0x1f63d10): expected_datao=0, payload_size=4096 00:33:03.266 [2024-11-28 13:03:33.278078] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.266 [2024-11-28 13:03:33.278084] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:03.266 [2024-11-28 13:03:33.278088] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:03.266 [2024-11-28 13:03:33.278266] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.266 [2024-11-28 13:03:33.278273] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.266 [2024-11-28 13:03:33.278276] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.266 [2024-11-28 13:03:33.278280] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfd00) on tqpair=0x1f63d10 00:33:03.266 [2024-11-28 13:03:33.278291] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:33:03.266 [2024-11-28 13:03:33.278299] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:33:03.266 [2024-11-28 13:03:33.278308] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:33:03.266 [2024-11-28 13:03:33.278315] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:33:03.266 [2024-11-28 13:03:33.278320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:33:03.266 [2024-11-28 13:03:33.278326] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:33:03.266 [2024-11-28 13:03:33.278332] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:33:03.266 [2024-11-28 13:03:33.278337] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:33:03.266 [2024-11-28 13:03:33.278342] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:33:03.266 [2024-11-28 13:03:33.278359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.266 [2024-11-28 13:03:33.278363] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f63d10) 00:33:03.266 [2024-11-28 13:03:33.278369] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:03.266 [2024-11-28 13:03:33.278377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.266 [2024-11-28 13:03:33.278381] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.266 [2024-11-28 13:03:33.278384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f63d10) 00:33:03.266 [2024-11-28 13:03:33.278391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.266 [2024-11-28 13:03:33.278404] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfd00, cid 4, qid 0 00:33:03.266 [2024-11-28 13:03:33.278410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfe80, cid 5, qid 0 00:33:03.266 [2024-11-28 13:03:33.278624] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.267 [2024-11-28 13:03:33.278632] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.267 [2024-11-28 13:03:33.278635] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.278639] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfd00) on tqpair=0x1f63d10 00:33:03.267 [2024-11-28 13:03:33.278646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.267 [2024-11-28 13:03:33.278651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.267 [2024-11-28 13:03:33.278657] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.278661] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfe80) on tqpair=0x1f63d10 00:33:03.267 [2024-11-28 13:03:33.278670] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.278674] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f63d10) 00:33:03.267 [2024-11-28 
13:03:33.278681] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.267 [2024-11-28 13:03:33.278691] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfe80, cid 5, qid 0 00:33:03.267 [2024-11-28 13:03:33.278843] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.267 [2024-11-28 13:03:33.278849] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.267 [2024-11-28 13:03:33.278852] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.278856] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfe80) on tqpair=0x1f63d10 00:33:03.267 [2024-11-28 13:03:33.278866] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.278869] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f63d10) 00:33:03.267 [2024-11-28 13:03:33.278876] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.267 [2024-11-28 13:03:33.278886] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfe80, cid 5, qid 0 00:33:03.267 [2024-11-28 13:03:33.279070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.267 [2024-11-28 13:03:33.279076] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.267 [2024-11-28 13:03:33.279079] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.279083] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfe80) on tqpair=0x1f63d10 00:33:03.267 [2024-11-28 13:03:33.279093] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.279097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=5 on tqpair(0x1f63d10) 00:33:03.267 [2024-11-28 13:03:33.279103] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.267 [2024-11-28 13:03:33.279113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfe80, cid 5, qid 0 00:33:03.267 [2024-11-28 13:03:33.279307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.267 [2024-11-28 13:03:33.279314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.267 [2024-11-28 13:03:33.279317] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.279321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfe80) on tqpair=0x1f63d10 00:33:03.267 [2024-11-28 13:03:33.279338] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.279342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f63d10) 00:33:03.267 [2024-11-28 13:03:33.279349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.267 [2024-11-28 13:03:33.279356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.279360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f63d10) 00:33:03.267 [2024-11-28 13:03:33.279366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.267 [2024-11-28 13:03:33.279373] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.279377] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1f63d10) 
00:33:03.267 [2024-11-28 13:03:33.279385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.267 [2024-11-28 13:03:33.279393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.279396] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f63d10) 00:33:03.267 [2024-11-28 13:03:33.279402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.267 [2024-11-28 13:03:33.279414] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfe80, cid 5, qid 0 00:33:03.267 [2024-11-28 13:03:33.279419] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfd00, cid 4, qid 0 00:33:03.267 [2024-11-28 13:03:33.279424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe0000, cid 6, qid 0 00:33:03.267 [2024-11-28 13:03:33.279429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe0180, cid 7, qid 0 00:33:03.267 [2024-11-28 13:03:33.279661] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:03.267 [2024-11-28 13:03:33.279668] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:03.267 [2024-11-28 13:03:33.279671] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.279675] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f63d10): datao=0, datal=8192, cccid=5 00:33:03.267 [2024-11-28 13:03:33.279679] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fdfe80) on tqpair(0x1f63d10): expected_datao=0, payload_size=8192 00:33:03.267 [2024-11-28 13:03:33.279684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.267 [2024-11-28 
13:03:33.279773] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.279777] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.279783] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:03.267 [2024-11-28 13:03:33.279789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:03.267 [2024-11-28 13:03:33.279792] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.279796] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f63d10): datao=0, datal=512, cccid=4 00:33:03.267 [2024-11-28 13:03:33.279800] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fdfd00) on tqpair(0x1f63d10): expected_datao=0, payload_size=512 00:33:03.267 [2024-11-28 13:03:33.279804] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.279811] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.279814] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.279820] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:03.267 [2024-11-28 13:03:33.279826] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:03.267 [2024-11-28 13:03:33.279829] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.279833] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f63d10): datao=0, datal=512, cccid=6 00:33:03.267 [2024-11-28 13:03:33.279837] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fe0000) on tqpair(0x1f63d10): expected_datao=0, payload_size=512 00:33:03.267 [2024-11-28 13:03:33.279841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.279848] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: 
*DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.279851] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.279857] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:03.267 [2024-11-28 13:03:33.279863] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:03.267 [2024-11-28 13:03:33.279866] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:03.267 [2024-11-28 13:03:33.279870] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f63d10): datao=0, datal=4096, cccid=7 00:33:03.267 [2024-11-28 13:03:33.279876] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fe0180) on tqpair(0x1f63d10): expected_datao=0, payload_size=4096 00:33:03.267 [2024-11-28 13:03:33.279881] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.268 [2024-11-28 13:03:33.279898] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:03.268 [2024-11-28 13:03:33.279902] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:03.268 [2024-11-28 13:03:33.280069] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.268 [2024-11-28 13:03:33.280075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.268 [2024-11-28 13:03:33.280078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.268 [2024-11-28 13:03:33.280082] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfe80) on tqpair=0x1f63d10 00:33:03.268 [2024-11-28 13:03:33.280094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.268 [2024-11-28 13:03:33.280100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.268 [2024-11-28 13:03:33.280104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.268 [2024-11-28 13:03:33.280107] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfd00) on 
tqpair=0x1f63d10 00:33:03.268 [2024-11-28 13:03:33.280118] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.268 [2024-11-28 13:03:33.280124] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.268 [2024-11-28 13:03:33.280127] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.268 [2024-11-28 13:03:33.280131] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fe0000) on tqpair=0x1f63d10 00:33:03.268 [2024-11-28 13:03:33.280138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.268 [2024-11-28 13:03:33.280144] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.268 [2024-11-28 13:03:33.280147] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.268 [2024-11-28 13:03:33.280151] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fe0180) on tqpair=0x1f63d10 00:33:03.268 ===================================================== 00:33:03.268 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:03.268 ===================================================== 00:33:03.268 Controller Capabilities/Features 00:33:03.268 ================================ 00:33:03.268 Vendor ID: 8086 00:33:03.268 Subsystem Vendor ID: 8086 00:33:03.268 Serial Number: SPDK00000000000001 00:33:03.268 Model Number: SPDK bdev Controller 00:33:03.268 Firmware Version: 25.01 00:33:03.268 Recommended Arb Burst: 6 00:33:03.268 IEEE OUI Identifier: e4 d2 5c 00:33:03.268 Multi-path I/O 00:33:03.268 May have multiple subsystem ports: Yes 00:33:03.268 May have multiple controllers: Yes 00:33:03.268 Associated with SR-IOV VF: No 00:33:03.268 Max Data Transfer Size: 131072 00:33:03.268 Max Number of Namespaces: 32 00:33:03.268 Max Number of I/O Queues: 127 00:33:03.268 NVMe Specification Version (VS): 1.3 00:33:03.268 NVMe Specification Version (Identify): 1.3 00:33:03.268 Maximum Queue Entries: 128 00:33:03.268 
Contiguous Queues Required: Yes 00:33:03.268 Arbitration Mechanisms Supported 00:33:03.268 Weighted Round Robin: Not Supported 00:33:03.268 Vendor Specific: Not Supported 00:33:03.268 Reset Timeout: 15000 ms 00:33:03.268 Doorbell Stride: 4 bytes 00:33:03.268 NVM Subsystem Reset: Not Supported 00:33:03.268 Command Sets Supported 00:33:03.268 NVM Command Set: Supported 00:33:03.268 Boot Partition: Not Supported 00:33:03.268 Memory Page Size Minimum: 4096 bytes 00:33:03.268 Memory Page Size Maximum: 4096 bytes 00:33:03.268 Persistent Memory Region: Not Supported 00:33:03.268 Optional Asynchronous Events Supported 00:33:03.268 Namespace Attribute Notices: Supported 00:33:03.268 Firmware Activation Notices: Not Supported 00:33:03.268 ANA Change Notices: Not Supported 00:33:03.268 PLE Aggregate Log Change Notices: Not Supported 00:33:03.268 LBA Status Info Alert Notices: Not Supported 00:33:03.268 EGE Aggregate Log Change Notices: Not Supported 00:33:03.268 Normal NVM Subsystem Shutdown event: Not Supported 00:33:03.268 Zone Descriptor Change Notices: Not Supported 00:33:03.268 Discovery Log Change Notices: Not Supported 00:33:03.268 Controller Attributes 00:33:03.268 128-bit Host Identifier: Supported 00:33:03.268 Non-Operational Permissive Mode: Not Supported 00:33:03.268 NVM Sets: Not Supported 00:33:03.268 Read Recovery Levels: Not Supported 00:33:03.268 Endurance Groups: Not Supported 00:33:03.268 Predictable Latency Mode: Not Supported 00:33:03.268 Traffic Based Keep ALive: Not Supported 00:33:03.268 Namespace Granularity: Not Supported 00:33:03.268 SQ Associations: Not Supported 00:33:03.268 UUID List: Not Supported 00:33:03.268 Multi-Domain Subsystem: Not Supported 00:33:03.268 Fixed Capacity Management: Not Supported 00:33:03.268 Variable Capacity Management: Not Supported 00:33:03.268 Delete Endurance Group: Not Supported 00:33:03.268 Delete NVM Set: Not Supported 00:33:03.268 Extended LBA Formats Supported: Not Supported 00:33:03.268 Flexible Data Placement 
Supported: Not Supported 00:33:03.268 00:33:03.268 Controller Memory Buffer Support 00:33:03.268 ================================ 00:33:03.268 Supported: No 00:33:03.268 00:33:03.268 Persistent Memory Region Support 00:33:03.268 ================================ 00:33:03.268 Supported: No 00:33:03.268 00:33:03.268 Admin Command Set Attributes 00:33:03.268 ============================ 00:33:03.268 Security Send/Receive: Not Supported 00:33:03.268 Format NVM: Not Supported 00:33:03.268 Firmware Activate/Download: Not Supported 00:33:03.268 Namespace Management: Not Supported 00:33:03.268 Device Self-Test: Not Supported 00:33:03.268 Directives: Not Supported 00:33:03.268 NVMe-MI: Not Supported 00:33:03.268 Virtualization Management: Not Supported 00:33:03.268 Doorbell Buffer Config: Not Supported 00:33:03.268 Get LBA Status Capability: Not Supported 00:33:03.268 Command & Feature Lockdown Capability: Not Supported 00:33:03.268 Abort Command Limit: 4 00:33:03.268 Async Event Request Limit: 4 00:33:03.268 Number of Firmware Slots: N/A 00:33:03.268 Firmware Slot 1 Read-Only: N/A 00:33:03.268 Firmware Activation Without Reset: N/A 00:33:03.268 Multiple Update Detection Support: N/A 00:33:03.268 Firmware Update Granularity: No Information Provided 00:33:03.268 Per-Namespace SMART Log: No 00:33:03.268 Asymmetric Namespace Access Log Page: Not Supported 00:33:03.268 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:33:03.268 Command Effects Log Page: Supported 00:33:03.268 Get Log Page Extended Data: Supported 00:33:03.268 Telemetry Log Pages: Not Supported 00:33:03.268 Persistent Event Log Pages: Not Supported 00:33:03.268 Supported Log Pages Log Page: May Support 00:33:03.268 Commands Supported & Effects Log Page: Not Supported 00:33:03.268 Feature Identifiers & Effects Log Page:May Support 00:33:03.268 NVMe-MI Commands & Effects Log Page: May Support 00:33:03.268 Data Area 4 for Telemetry Log: Not Supported 00:33:03.268 Error Log Page Entries Supported: 128 00:33:03.268 Keep 
Alive: Supported 00:33:03.268 Keep Alive Granularity: 10000 ms 00:33:03.269 00:33:03.269 NVM Command Set Attributes 00:33:03.269 ========================== 00:33:03.269 Submission Queue Entry Size 00:33:03.269 Max: 64 00:33:03.269 Min: 64 00:33:03.269 Completion Queue Entry Size 00:33:03.269 Max: 16 00:33:03.269 Min: 16 00:33:03.269 Number of Namespaces: 32 00:33:03.269 Compare Command: Supported 00:33:03.269 Write Uncorrectable Command: Not Supported 00:33:03.269 Dataset Management Command: Supported 00:33:03.269 Write Zeroes Command: Supported 00:33:03.269 Set Features Save Field: Not Supported 00:33:03.269 Reservations: Supported 00:33:03.269 Timestamp: Not Supported 00:33:03.269 Copy: Supported 00:33:03.269 Volatile Write Cache: Present 00:33:03.269 Atomic Write Unit (Normal): 1 00:33:03.269 Atomic Write Unit (PFail): 1 00:33:03.269 Atomic Compare & Write Unit: 1 00:33:03.269 Fused Compare & Write: Supported 00:33:03.269 Scatter-Gather List 00:33:03.269 SGL Command Set: Supported 00:33:03.269 SGL Keyed: Supported 00:33:03.269 SGL Bit Bucket Descriptor: Not Supported 00:33:03.269 SGL Metadata Pointer: Not Supported 00:33:03.269 Oversized SGL: Not Supported 00:33:03.269 SGL Metadata Address: Not Supported 00:33:03.269 SGL Offset: Supported 00:33:03.269 Transport SGL Data Block: Not Supported 00:33:03.269 Replay Protected Memory Block: Not Supported 00:33:03.269 00:33:03.269 Firmware Slot Information 00:33:03.269 ========================= 00:33:03.269 Active slot: 1 00:33:03.269 Slot 1 Firmware Revision: 25.01 00:33:03.269 00:33:03.269 00:33:03.269 Commands Supported and Effects 00:33:03.269 ============================== 00:33:03.269 Admin Commands 00:33:03.269 -------------- 00:33:03.269 Get Log Page (02h): Supported 00:33:03.269 Identify (06h): Supported 00:33:03.269 Abort (08h): Supported 00:33:03.269 Set Features (09h): Supported 00:33:03.269 Get Features (0Ah): Supported 00:33:03.269 Asynchronous Event Request (0Ch): Supported 00:33:03.269 Keep Alive (18h): 
Supported 00:33:03.269 I/O Commands 00:33:03.269 ------------ 00:33:03.269 Flush (00h): Supported LBA-Change 00:33:03.269 Write (01h): Supported LBA-Change 00:33:03.269 Read (02h): Supported 00:33:03.269 Compare (05h): Supported 00:33:03.269 Write Zeroes (08h): Supported LBA-Change 00:33:03.269 Dataset Management (09h): Supported LBA-Change 00:33:03.269 Copy (19h): Supported LBA-Change 00:33:03.269 00:33:03.269 Error Log 00:33:03.269 ========= 00:33:03.269 00:33:03.269 Arbitration 00:33:03.269 =========== 00:33:03.269 Arbitration Burst: 1 00:33:03.269 00:33:03.269 Power Management 00:33:03.269 ================ 00:33:03.269 Number of Power States: 1 00:33:03.269 Current Power State: Power State #0 00:33:03.269 Power State #0: 00:33:03.269 Max Power: 0.00 W 00:33:03.269 Non-Operational State: Operational 00:33:03.269 Entry Latency: Not Reported 00:33:03.269 Exit Latency: Not Reported 00:33:03.269 Relative Read Throughput: 0 00:33:03.269 Relative Read Latency: 0 00:33:03.269 Relative Write Throughput: 0 00:33:03.269 Relative Write Latency: 0 00:33:03.269 Idle Power: Not Reported 00:33:03.269 Active Power: Not Reported 00:33:03.269 Non-Operational Permissive Mode: Not Supported 00:33:03.269 00:33:03.269 Health Information 00:33:03.269 ================== 00:33:03.269 Critical Warnings: 00:33:03.269 Available Spare Space: OK 00:33:03.269 Temperature: OK 00:33:03.269 Device Reliability: OK 00:33:03.269 Read Only: No 00:33:03.269 Volatile Memory Backup: OK 00:33:03.269 Current Temperature: 0 Kelvin (-273 Celsius) 00:33:03.269 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:33:03.269 Available Spare: 0% 00:33:03.269 Available Spare Threshold: 0% 00:33:03.269 Life Percentage Used:[2024-11-28 13:03:33.280258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.269 [2024-11-28 13:03:33.280263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f63d10) 00:33:03.269 [2024-11-28 13:03:33.280270] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.269 [2024-11-28 13:03:33.280282] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe0180, cid 7, qid 0 00:33:03.269 [2024-11-28 13:03:33.280486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.269 [2024-11-28 13:03:33.280492] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.269 [2024-11-28 13:03:33.280496] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.269 [2024-11-28 13:03:33.280500] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fe0180) on tqpair=0x1f63d10 00:33:03.269 [2024-11-28 13:03:33.280536] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:33:03.269 [2024-11-28 13:03:33.280547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdf700) on tqpair=0x1f63d10 00:33:03.269 [2024-11-28 13:03:33.280555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.269 [2024-11-28 13:03:33.280560] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdf880) on tqpair=0x1f63d10 00:33:03.269 [2024-11-28 13:03:33.280565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.269 [2024-11-28 13:03:33.280570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfa00) on tqpair=0x1f63d10 00:33:03.269 [2024-11-28 13:03:33.280575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.270 [2024-11-28 13:03:33.280580] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfb80) on tqpair=0x1f63d10 00:33:03.270 [2024-11-28 13:03:33.280587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.270 [2024-11-28 13:03:33.280595] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.270 [2024-11-28 13:03:33.280599] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.270 [2024-11-28 13:03:33.280603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f63d10) 00:33:03.270 [2024-11-28 13:03:33.280610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.270 [2024-11-28 13:03:33.280622] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfb80, cid 3, qid 0 00:33:03.270 [2024-11-28 13:03:33.280728] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.270 [2024-11-28 13:03:33.280736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.270 [2024-11-28 13:03:33.280740] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.270 [2024-11-28 13:03:33.280744] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfb80) on tqpair=0x1f63d10 00:33:03.270 [2024-11-28 13:03:33.280751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.270 [2024-11-28 13:03:33.280754] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.270 [2024-11-28 13:03:33.280758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f63d10) 00:33:03.270 [2024-11-28 13:03:33.280765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.270 [2024-11-28 13:03:33.280778] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfb80, cid 3, qid 0 00:33:03.270 [2024-11-28 13:03:33.280975] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:33:03.270 [2024-11-28 13:03:33.280981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.270 [2024-11-28 13:03:33.280984] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.270 [2024-11-28 13:03:33.280988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfb80) on tqpair=0x1f63d10 00:33:03.270 [2024-11-28 13:03:33.280993] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:33:03.270 [2024-11-28 13:03:33.280998] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:33:03.270 [2024-11-28 13:03:33.281008] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.270 [2024-11-28 13:03:33.281012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:03.270 [2024-11-28 13:03:33.281015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f63d10) 00:33:03.270 [2024-11-28 13:03:33.281022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.270 [2024-11-28 13:03:33.281032] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfb80, cid 3, qid 0 00:33:03.270 [2024-11-28 13:03:33.285170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.270 [2024-11-28 13:03:33.285180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.270 [2024-11-28 13:03:33.285183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.270 [2024-11-28 13:03:33.285187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfb80) on tqpair=0x1f63d10 00:33:03.270 [2024-11-28 13:03:33.285199] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:03.270 [2024-11-28 13:03:33.285203] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:33:03.270 [2024-11-28 13:03:33.285207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f63d10) 00:33:03.270 [2024-11-28 13:03:33.285213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.270 [2024-11-28 13:03:33.285225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfb80, cid 3, qid 0 00:33:03.270 [2024-11-28 13:03:33.285424] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:03.270 [2024-11-28 13:03:33.285434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:03.270 [2024-11-28 13:03:33.285437] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:03.270 [2024-11-28 13:03:33.285441] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfb80) on tqpair=0x1f63d10 00:33:03.270 [2024-11-28 13:03:33.285449] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:33:03.270 0% 00:33:03.270 Data Units Read: 0 00:33:03.270 Data Units Written: 0 00:33:03.270 Host Read Commands: 0 00:33:03.270 Host Write Commands: 0 00:33:03.270 Controller Busy Time: 0 minutes 00:33:03.270 Power Cycles: 0 00:33:03.270 Power On Hours: 0 hours 00:33:03.270 Unsafe Shutdowns: 0 00:33:03.270 Unrecoverable Media Errors: 0 00:33:03.270 Lifetime Error Log Entries: 0 00:33:03.270 Warning Temperature Time: 0 minutes 00:33:03.270 Critical Temperature Time: 0 minutes 00:33:03.270 00:33:03.270 Number of Queues 00:33:03.270 ================ 00:33:03.270 Number of I/O Submission Queues: 127 00:33:03.270 Number of I/O Completion Queues: 127 00:33:03.270 00:33:03.270 Active Namespaces 00:33:03.270 ================= 00:33:03.270 Namespace ID:1 00:33:03.270 Error Recovery Timeout: Unlimited 00:33:03.270 Command Set Identifier: NVM (00h) 00:33:03.270 Deallocate: Supported 00:33:03.270 Deallocated/Unwritten 
Error: Not Supported 00:33:03.270 Deallocated Read Value: Unknown 00:33:03.270 Deallocate in Write Zeroes: Not Supported 00:33:03.270 Deallocated Guard Field: 0xFFFF 00:33:03.270 Flush: Supported 00:33:03.270 Reservation: Supported 00:33:03.270 Namespace Sharing Capabilities: Multiple Controllers 00:33:03.270 Size (in LBAs): 131072 (0GiB) 00:33:03.270 Capacity (in LBAs): 131072 (0GiB) 00:33:03.270 Utilization (in LBAs): 131072 (0GiB) 00:33:03.270 NGUID: ABCDEF0123456789ABCDEF0123456789 00:33:03.270 EUI64: ABCDEF0123456789 00:33:03.270 UUID: e394d528-4f23-4693-b2e4-0a938d3ffd7e 00:33:03.270 Thin Provisioning: Not Supported 00:33:03.270 Per-NS Atomic Units: Yes 00:33:03.270 Atomic Boundary Size (Normal): 0 00:33:03.270 Atomic Boundary Size (PFail): 0 00:33:03.270 Atomic Boundary Offset: 0 00:33:03.270 Maximum Single Source Range Length: 65535 00:33:03.270 Maximum Copy Length: 65535 00:33:03.270 Maximum Source Range Count: 1 00:33:03.270 NGUID/EUI64 Never Reused: No 00:33:03.270 Namespace Write Protected: No 00:33:03.270 Number of LBA Formats: 1 00:33:03.270 Current LBA Format: LBA Format #00 00:33:03.270 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:03.270 00:33:03.270 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:33:03.270 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:03.270 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.270 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:03.270 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.270 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:33:03.270 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:33:03.270 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:33:03.271 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:33:03.271 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:03.271 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:33:03.271 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:03.271 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:03.271 rmmod nvme_tcp 00:33:03.271 rmmod nvme_fabrics 00:33:03.271 rmmod nvme_keyring 00:33:03.271 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3569631 ']' 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3569631 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3569631 ']' 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3569631 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3569631 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3569631' 00:33:03.531 killing process with pid 3569631 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3569631 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3569631 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:03.531 13:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:06.074 00:33:06.074 real 0m11.959s 00:33:06.074 user 0m9.095s 00:33:06.074 sys 0m6.230s 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:06.074 
************************************ 00:33:06.074 END TEST nvmf_identify 00:33:06.074 ************************************ 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.074 ************************************ 00:33:06.074 START TEST nvmf_perf 00:33:06.074 ************************************ 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:33:06.074 * Looking for test storage... 00:33:06.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host.nvmf_perf -- 
scripts/common.sh@337 -- # IFS=.-: 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:33:06.074 13:03:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:33:06.075 13:03:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:06.075 13:03:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:06.075 13:03:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:06.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.075 --rc genhtml_branch_coverage=1 00:33:06.075 --rc genhtml_function_coverage=1 00:33:06.075 --rc genhtml_legend=1 00:33:06.075 --rc geninfo_all_blocks=1 00:33:06.075 --rc geninfo_unexecuted_blocks=1 00:33:06.075 00:33:06.075 ' 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:06.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.075 --rc genhtml_branch_coverage=1 00:33:06.075 --rc genhtml_function_coverage=1 00:33:06.075 --rc genhtml_legend=1 00:33:06.075 --rc geninfo_all_blocks=1 00:33:06.075 --rc geninfo_unexecuted_blocks=1 00:33:06.075 00:33:06.075 ' 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:06.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.075 --rc genhtml_branch_coverage=1 00:33:06.075 --rc genhtml_function_coverage=1 00:33:06.075 --rc genhtml_legend=1 00:33:06.075 --rc geninfo_all_blocks=1 00:33:06.075 --rc geninfo_unexecuted_blocks=1 00:33:06.075 00:33:06.075 ' 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:06.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.075 --rc genhtml_branch_coverage=1 00:33:06.075 --rc genhtml_function_coverage=1 00:33:06.075 --rc genhtml_legend=1 00:33:06.075 --rc geninfo_all_blocks=1 00:33:06.075 --rc geninfo_unexecuted_blocks=1 00:33:06.075 00:33:06.075 ' 00:33:06.075 13:03:36 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:06.075 13:03:36 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:06.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:06.075 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:33:06.076 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:06.076 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:06.076 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:33:06.076 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:06.076 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:06.076 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:06.076 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:06.076 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:06.076 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:06.076 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:06.076 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:06.076 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:06.076 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:06.076 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:33:06.076 13:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:14.213 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:14.213 
13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:14.213 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.213 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:14.213 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:14.214 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:14.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:14.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:33:14.214 00:33:14.214 --- 10.0.0.2 ping statistics --- 00:33:14.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.214 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:14.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:14.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:33:14.214 00:33:14.214 --- 10.0.0.1 ping statistics --- 00:33:14.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.214 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3574071 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3574071 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:14.214 
13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3574071 ']' 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:14.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:14.214 13:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:14.214 [2024-11-28 13:03:43.665457] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:33:14.214 [2024-11-28 13:03:43.665525] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:14.214 [2024-11-28 13:03:43.810481] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:14.214 [2024-11-28 13:03:43.869821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:14.214 [2024-11-28 13:03:43.898349] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:14.214 [2024-11-28 13:03:43.898391] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:14.214 [2024-11-28 13:03:43.898400] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:14.214 [2024-11-28 13:03:43.898408] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:14.214 [2024-11-28 13:03:43.898414] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:14.214 [2024-11-28 13:03:43.900225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:14.214 [2024-11-28 13:03:43.900447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.214 [2024-11-28 13:03:43.900447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:14.214 [2024-11-28 13:03:43.900288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:14.475 13:03:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:14.475 13:03:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:33:14.475 13:03:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:14.475 13:03:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:14.475 13:03:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:14.475 13:03:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:14.475 13:03:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:14.475 13:03:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:33:15.050 13:03:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:33:15.050 13:03:45 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:33:15.314 13:03:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:33:15.314 13:03:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:15.575 13:03:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:33:15.576 13:03:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:33:15.576 13:03:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:33:15.576 13:03:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:33:15.576 13:03:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:33:15.576 [2024-11-28 13:03:45.656387] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:15.576 13:03:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:15.837 13:03:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:33:15.837 13:03:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:16.099 13:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:33:16.099 13:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:16.360 13:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:16.360 [2024-11-28 13:03:46.442257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:16.360 13:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:16.622 13:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:33:16.622 13:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:33:16.622 13:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:33:16.622 13:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:33:18.005 Initializing NVMe Controllers 00:33:18.005 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:33:18.005 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:33:18.005 Initialization complete. Launching workers. 
00:33:18.005 ======================================================== 00:33:18.005 Latency(us) 00:33:18.005 Device Information : IOPS MiB/s Average min max 00:33:18.005 PCIE (0000:65:00.0) NSID 1 from core 0: 78147.59 305.26 408.93 13.53 4965.48 00:33:18.005 ======================================================== 00:33:18.005 Total : 78147.59 305.26 408.93 13.53 4965.48 00:33:18.005 00:33:18.005 13:03:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:19.392 Initializing NVMe Controllers 00:33:19.392 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:19.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:19.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:19.392 Initialization complete. Launching workers. 
00:33:19.392 ======================================================== 00:33:19.392 Latency(us) 00:33:19.392 Device Information : IOPS MiB/s Average min max 00:33:19.392 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 96.00 0.38 10856.48 196.95 46351.37 00:33:19.392 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 67.00 0.26 15163.32 7979.28 48002.72 00:33:19.392 ======================================================== 00:33:19.392 Total : 163.00 0.64 12626.78 196.95 48002.72 00:33:19.392 00:33:19.392 13:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:21.310 Initializing NVMe Controllers 00:33:21.310 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:21.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:21.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:21.310 Initialization complete. Launching workers. 
00:33:21.310 ======================================================== 00:33:21.310 Latency(us) 00:33:21.310 Device Information : IOPS MiB/s Average min max 00:33:21.310 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12319.85 48.12 2599.09 406.34 6439.67 00:33:21.310 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3847.95 15.03 8392.98 5519.06 16061.87 00:33:21.310 ======================================================== 00:33:21.310 Total : 16167.81 63.16 3978.04 406.34 16061.87 00:33:21.310 00:33:21.310 13:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:33:21.310 13:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:33:21.310 13:03:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:23.860 Initializing NVMe Controllers 00:33:23.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:23.860 Controller IO queue size 128, less than required. 00:33:23.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:23.860 Controller IO queue size 128, less than required. 00:33:23.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:23.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:23.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:23.860 Initialization complete. Launching workers. 
00:33:23.860 ======================================================== 00:33:23.860 Latency(us) 00:33:23.860 Device Information : IOPS MiB/s Average min max 00:33:23.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1929.78 482.45 67249.05 47475.39 121631.44 00:33:23.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 607.62 151.90 219305.72 56218.27 331114.87 00:33:23.860 ======================================================== 00:33:23.860 Total : 2537.40 634.35 103661.20 47475.39 331114.87 00:33:23.860 00:33:23.860 13:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:33:23.860 No valid NVMe controllers or AIO or URING devices found 00:33:23.860 Initializing NVMe Controllers 00:33:23.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:23.860 Controller IO queue size 128, less than required. 00:33:23.861 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:23.861 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:33:23.861 Controller IO queue size 128, less than required. 00:33:23.861 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:23.861 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:33:23.861 WARNING: Some requested NVMe devices were skipped 00:33:23.861 13:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:33:26.410 Initializing NVMe Controllers 00:33:26.410 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:26.410 Controller IO queue size 128, less than required. 00:33:26.410 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:26.410 Controller IO queue size 128, less than required. 00:33:26.410 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:26.410 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:26.410 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:26.410 Initialization complete. Launching workers. 
00:33:26.410 00:33:26.410 ==================== 00:33:26.410 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:33:26.410 TCP transport: 00:33:26.410 polls: 36088 00:33:26.410 idle_polls: 21136 00:33:26.410 sock_completions: 14952 00:33:26.410 nvme_completions: 7569 00:33:26.410 submitted_requests: 11302 00:33:26.410 queued_requests: 1 00:33:26.410 00:33:26.410 ==================== 00:33:26.410 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:33:26.410 TCP transport: 00:33:26.410 polls: 39508 00:33:26.410 idle_polls: 24528 00:33:26.410 sock_completions: 14980 00:33:26.410 nvme_completions: 7299 00:33:26.410 submitted_requests: 10954 00:33:26.410 queued_requests: 1 00:33:26.410 ======================================================== 00:33:26.410 Latency(us) 00:33:26.410 Device Information : IOPS MiB/s Average min max 00:33:26.410 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1890.12 472.53 68898.86 30673.80 136007.48 00:33:26.410 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1822.69 455.67 70897.35 25067.61 108448.51 00:33:26.410 ======================================================== 00:33:26.410 Total : 3712.81 928.20 69879.96 25067.61 136007.48 00:33:26.410 00:33:26.410 13:03:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:33:26.672 13:03:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:26.672 13:03:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:33:26.672 13:03:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:33:26.672 13:03:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:33:28.058 13:03:57 nvmf_tcp.nvmf_host.nvmf_perf 
-- host/perf.sh@72 -- # ls_guid=fa447beb-9ca8-4eb6-8b51-320da61d3b62 00:33:28.058 13:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb fa447beb-9ca8-4eb6-8b51-320da61d3b62 00:33:28.058 13:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=fa447beb-9ca8-4eb6-8b51-320da61d3b62 00:33:28.058 13:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:33:28.058 13:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:33:28.058 13:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:33:28.058 13:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:28.058 13:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:33:28.058 { 00:33:28.058 "uuid": "fa447beb-9ca8-4eb6-8b51-320da61d3b62", 00:33:28.058 "name": "lvs_0", 00:33:28.058 "base_bdev": "Nvme0n1", 00:33:28.058 "total_data_clusters": 457407, 00:33:28.058 "free_clusters": 457407, 00:33:28.058 "block_size": 512, 00:33:28.058 "cluster_size": 4194304 00:33:28.058 } 00:33:28.058 ]' 00:33:28.058 13:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="fa447beb-9ca8-4eb6-8b51-320da61d3b62") .free_clusters' 00:33:28.058 13:03:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=457407 00:33:28.058 13:03:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="fa447beb-9ca8-4eb6-8b51-320da61d3b62") .cluster_size' 00:33:28.058 13:03:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:33:28.058 13:03:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=1829628 00:33:28.058 13:03:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 1829628 
00:33:28.058 1829628 00:33:28.058 13:03:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:33:28.058 13:03:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:33:28.058 13:03:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fa447beb-9ca8-4eb6-8b51-320da61d3b62 lbd_0 20480 00:33:28.318 13:03:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=cad13cd5-f77e-4db7-853e-946a6f73ddbc 00:33:28.318 13:03:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore cad13cd5-f77e-4db7-853e-946a6f73ddbc lvs_n_0 00:33:30.234 13:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=13ce1cc4-8886-449b-97a7-262ce4d4814e 00:33:30.234 13:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 13ce1cc4-8886-449b-97a7-262ce4d4814e 00:33:30.234 13:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=13ce1cc4-8886-449b-97a7-262ce4d4814e 00:33:30.234 13:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:33:30.234 13:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:33:30.234 13:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:33:30.234 13:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:30.234 13:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:33:30.234 { 00:33:30.234 "uuid": "fa447beb-9ca8-4eb6-8b51-320da61d3b62", 00:33:30.234 "name": "lvs_0", 00:33:30.234 "base_bdev": "Nvme0n1", 00:33:30.234 "total_data_clusters": 457407, 00:33:30.234 "free_clusters": 452287, 00:33:30.234 "block_size": 512, 00:33:30.234 
"cluster_size": 4194304 00:33:30.234 }, 00:33:30.234 { 00:33:30.234 "uuid": "13ce1cc4-8886-449b-97a7-262ce4d4814e", 00:33:30.234 "name": "lvs_n_0", 00:33:30.234 "base_bdev": "cad13cd5-f77e-4db7-853e-946a6f73ddbc", 00:33:30.234 "total_data_clusters": 5114, 00:33:30.234 "free_clusters": 5114, 00:33:30.234 "block_size": 512, 00:33:30.234 "cluster_size": 4194304 00:33:30.234 } 00:33:30.234 ]' 00:33:30.234 13:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="13ce1cc4-8886-449b-97a7-262ce4d4814e") .free_clusters' 00:33:30.234 13:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:33:30.234 13:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="13ce1cc4-8886-449b-97a7-262ce4d4814e") .cluster_size' 00:33:30.234 13:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:33:30.234 13:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:33:30.234 13:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:33:30.234 20456 00:33:30.234 13:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:33:30.234 13:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 13ce1cc4-8886-449b-97a7-262ce4d4814e lbd_nest_0 20456 00:33:30.234 13:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=32dcc7d1-6872-4b64-9775-5bf99cbd0dd3 00:33:30.234 13:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:30.496 13:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:33:30.496 13:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 32dcc7d1-6872-4b64-9775-5bf99cbd0dd3 00:33:30.758 13:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:30.758 13:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:33:30.758 13:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:33:30.758 13:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:30.758 13:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:30.758 13:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:42.991 Initializing NVMe Controllers 00:33:42.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:42.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:42.991 Initialization complete. Launching workers. 
00:33:42.991 ======================================================== 00:33:42.991 Latency(us) 00:33:42.991 Device Information : IOPS MiB/s Average min max 00:33:42.991 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 42.68 0.02 23526.87 114.15 46001.87 00:33:42.991 ======================================================== 00:33:42.991 Total : 42.68 0.02 23526.87 114.15 46001.87 00:33:42.991 00:33:42.991 13:04:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:42.991 13:04:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:52.990 Initializing NVMe Controllers 00:33:52.990 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:52.990 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:52.990 Initialization complete. Launching workers. 
00:33:52.990 ======================================================== 00:33:52.990 Latency(us) 00:33:52.990 Device Information : IOPS MiB/s Average min max 00:33:52.990 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 68.80 8.60 14546.08 6255.31 48023.92 00:33:52.990 ======================================================== 00:33:52.991 Total : 68.80 8.60 14546.08 6255.31 48023.92 00:33:52.991 00:33:52.991 13:04:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:52.991 13:04:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:52.991 13:04:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:02.994 Initializing NVMe Controllers 00:34:02.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:02.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:02.994 Initialization complete. Launching workers. 
00:34:02.994 ======================================================== 00:34:02.994 Latency(us) 00:34:02.994 Device Information : IOPS MiB/s Average min max 00:34:02.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8733.20 4.26 3664.31 400.98 7832.68 00:34:02.994 ======================================================== 00:34:02.994 Total : 8733.20 4.26 3664.31 400.98 7832.68 00:34:02.994 00:34:02.994 13:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:34:02.994 13:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:13.003 Initializing NVMe Controllers 00:34:13.003 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:13.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:13.003 Initialization complete. Launching workers. 
00:34:13.003 ======================================================== 00:34:13.003 Latency(us) 00:34:13.003 Device Information : IOPS MiB/s Average min max 00:34:13.003 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3836.69 479.59 8344.53 561.98 27142.04 00:34:13.003 ======================================================== 00:34:13.003 Total : 3836.69 479.59 8344.53 561.98 27142.04 00:34:13.003 00:34:13.003 13:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:34:13.003 13:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:34:13.003 13:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:25.241 Initializing NVMe Controllers 00:34:25.241 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:25.241 Controller IO queue size 128, less than required. 00:34:25.241 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:25.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:25.241 Initialization complete. Launching workers. 
00:34:25.241 ======================================================== 00:34:25.241 Latency(us) 00:34:25.241 Device Information : IOPS MiB/s Average min max 00:34:25.241 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15792.77 7.71 8109.23 1816.28 22739.75 00:34:25.241 ======================================================== 00:34:25.241 Total : 15792.77 7.71 8109.23 1816.28 22739.75 00:34:25.241 00:34:25.241 13:04:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:34:25.241 13:04:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:35.249 Initializing NVMe Controllers 00:34:35.249 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:35.249 Controller IO queue size 128, less than required. 00:34:35.249 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:35.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:35.249 Initialization complete. Launching workers. 
00:34:35.249 ======================================================== 00:34:35.249 Latency(us) 00:34:35.249 Device Information : IOPS MiB/s Average min max 00:34:35.249 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1204.60 150.57 106920.80 23691.77 206463.57 00:34:35.249 ======================================================== 00:34:35.249 Total : 1204.60 150.57 106920.80 23691.77 206463.57 00:34:35.249 00:34:35.249 13:05:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:35.249 13:05:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 32dcc7d1-6872-4b64-9775-5bf99cbd0dd3 00:34:35.510 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:34:35.770 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cad13cd5-f77e-4db7-853e-946a6f73ddbc 00:34:35.771 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:34:36.032 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:34:36.032 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:34:36.032 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:36.032 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:34:36.032 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:36.032 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:34:36.032 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:34:36.032 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:36.032 rmmod nvme_tcp 00:34:36.032 rmmod nvme_fabrics 00:34:36.032 rmmod nvme_keyring 00:34:36.032 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:36.032 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:34:36.032 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:34:36.032 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3574071 ']' 00:34:36.032 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3574071 00:34:36.032 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3574071 ']' 00:34:36.032 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3574071 00:34:36.032 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:34:36.032 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:36.032 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3574071 00:34:36.032 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:36.032 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:36.032 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3574071' 00:34:36.032 killing process with pid 3574071 00:34:36.032 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3574071 00:34:36.032 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3574071 00:34:38.129 13:05:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:38.130 13:05:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # 
[[ tcp == \t\c\p ]] 00:34:38.130 13:05:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:38.130 13:05:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:34:38.130 13:05:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:34:38.130 13:05:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:38.130 13:05:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:34:38.130 13:05:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:38.130 13:05:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:38.130 13:05:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:38.130 13:05:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:38.130 13:05:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:40.674 00:34:40.674 real 1m34.371s 00:34:40.674 user 5m32.666s 00:34:40.674 sys 0m16.371s 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:34:40.674 ************************************ 00:34:40.674 END TEST nvmf_perf 00:34:40.674 ************************************ 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:40.674 ************************************ 00:34:40.674 START TEST nvmf_fio_host 00:34:40.674 ************************************ 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:34:40.674 * Looking for test storage... 00:34:40.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- 
# export 'LCOV_OPTS= 00:34:40.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.674 --rc genhtml_branch_coverage=1 00:34:40.674 --rc genhtml_function_coverage=1 00:34:40.674 --rc genhtml_legend=1 00:34:40.674 --rc geninfo_all_blocks=1 00:34:40.674 --rc geninfo_unexecuted_blocks=1 00:34:40.674 00:34:40.674 ' 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:40.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.674 --rc genhtml_branch_coverage=1 00:34:40.674 --rc genhtml_function_coverage=1 00:34:40.674 --rc genhtml_legend=1 00:34:40.674 --rc geninfo_all_blocks=1 00:34:40.674 --rc geninfo_unexecuted_blocks=1 00:34:40.674 00:34:40.674 ' 00:34:40.674 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:40.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.674 --rc genhtml_branch_coverage=1 00:34:40.674 --rc genhtml_function_coverage=1 00:34:40.674 --rc genhtml_legend=1 00:34:40.675 --rc geninfo_all_blocks=1 00:34:40.675 --rc geninfo_unexecuted_blocks=1 00:34:40.675 00:34:40.675 ' 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:40.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.675 --rc genhtml_branch_coverage=1 00:34:40.675 --rc genhtml_function_coverage=1 00:34:40.675 --rc genhtml_legend=1 00:34:40.675 --rc geninfo_all_blocks=1 00:34:40.675 --rc geninfo_unexecuted_blocks=1 00:34:40.675 00:34:40.675 ' 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:40.675 13:05:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:40.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:40.675 13:05:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:40.675 13:05:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:4b:00.0 (0x8086 - 0x159b)' 00:34:48.818 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:48.818 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.818 13:05:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:48.818 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:48.818 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:48.818 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:48.819 13:05:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:48.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:48.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.544 ms 00:34:48.819 00:34:48.819 --- 10.0.0.2 ping statistics --- 00:34:48.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.819 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:48.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:48.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:34:48.819 00:34:48.819 --- 10.0.0.1 ping statistics --- 00:34:48.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.819 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:48.819 13:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:48.819 13:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:34:48.819 13:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:34:48.819 13:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:48.819 13:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.819 13:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3594185 00:34:48.819 13:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:48.819 13:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:34:48.819 13:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3594185 00:34:48.819 13:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3594185 ']' 00:34:48.819 13:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:48.819 13:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:48.819 13:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:48.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:48.819 13:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:48.819 13:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.819 [2024-11-28 13:05:18.110639] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:34:48.819 [2024-11-28 13:05:18.110705] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:48.819 [2024-11-28 13:05:18.255874] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:48.819 [2024-11-28 13:05:18.313685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:48.819 [2024-11-28 13:05:18.341416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:34:48.819 [2024-11-28 13:05:18.341458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:48.819 [2024-11-28 13:05:18.341466] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:48.819 [2024-11-28 13:05:18.341473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:48.819 [2024-11-28 13:05:18.341480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:48.819 [2024-11-28 13:05:18.343334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:48.819 [2024-11-28 13:05:18.343493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:48.819 [2024-11-28 13:05:18.343653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:48.819 [2024-11-28 13:05:18.343653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:49.080 13:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:49.080 13:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:34:49.080 13:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:49.080 [2024-11-28 13:05:19.107792] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:49.080 13:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:34:49.080 13:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:49.080 13:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.080 13:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:34:49.342 Malloc1 
00:34:49.342 13:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:49.602 13:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:49.863 13:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:49.863 [2024-11-28 13:05:19.978286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:50.123 13:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:50.123 13:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:50.123 13:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:50.123 13:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:50.123 13:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:50.123 13:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:50.123 
13:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:50.123 13:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:50.123 13:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:34:50.123 13:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:50.123 13:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:50.123 13:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:50.123 13:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:34:50.123 13:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:50.123 13:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:50.123 13:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:50.123 13:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:50.124 13:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:50.124 13:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:50.124 13:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:50.400 13:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:50.400 13:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:50.400 13:05:20 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:50.400 13:05:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:50.660 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:50.660 fio-3.35 00:34:50.660 Starting 1 thread 00:34:53.233 00:34:53.233 test: (groupid=0, jobs=1): err= 0: pid=3594787: Thu Nov 28 13:05:23 2024 00:34:53.233 read: IOPS=13.6k, BW=53.1MiB/s (55.7MB/s)(106MiB/2005msec) 00:34:53.233 slat (usec): min=2, max=290, avg= 2.16, stdev= 2.51 00:34:53.233 clat (usec): min=3317, max=8975, avg=5192.17, stdev=394.44 00:34:53.233 lat (usec): min=3319, max=8981, avg=5194.33, stdev=394.67 00:34:53.233 clat percentiles (usec): 00:34:53.233 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 4752], 20.00th=[ 4883], 00:34:53.233 | 30.00th=[ 5014], 40.00th=[ 5080], 50.00th=[ 5211], 60.00th=[ 5276], 00:34:53.233 | 70.00th=[ 5342], 80.00th=[ 5473], 90.00th=[ 5604], 95.00th=[ 5735], 00:34:53.233 | 99.00th=[ 6063], 99.50th=[ 6915], 99.90th=[ 8356], 99.95th=[ 8717], 00:34:53.233 | 99.99th=[ 8979] 00:34:53.233 bw ( KiB/s): min=53248, max=54920, per=100.00%, avg=54362.00, stdev=755.69, samples=4 00:34:53.233 iops : min=13312, max=13730, avg=13590.50, stdev=188.92, samples=4 00:34:53.233 write: IOPS=13.6k, BW=53.0MiB/s (55.6MB/s)(106MiB/2005msec); 0 zone resets 00:34:53.233 slat (usec): min=2, max=268, avg= 2.24, stdev= 1.80 00:34:53.233 clat (usec): min=2813, max=8152, avg=4194.42, stdev=350.48 00:34:53.233 lat (usec): min=2816, max=8154, avg=4196.66, stdev=350.75 00:34:53.233 clat percentiles (usec): 00:34:53.233 | 1.00th=[ 3490], 5.00th=[ 3720], 10.00th=[ 3818], 20.00th=[ 3949], 00:34:53.233 | 30.00th=[ 
4047], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:34:53.233 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4621], 00:34:53.233 | 99.00th=[ 5014], 99.50th=[ 5997], 99.90th=[ 7373], 99.95th=[ 7504], 00:34:53.233 | 99.99th=[ 8094] 00:34:53.233 bw ( KiB/s): min=53696, max=54728, per=100.00%, avg=54326.00, stdev=441.83, samples=4 00:34:53.233 iops : min=13424, max=13682, avg=13581.50, stdev=110.46, samples=4 00:34:53.233 lat (msec) : 4=12.67%, 10=87.33% 00:34:53.233 cpu : usr=74.95%, sys=23.80%, ctx=35, majf=0, minf=27 00:34:53.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:34:53.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:53.233 issued rwts: total=27247,27224,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.233 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:53.233 00:34:53.233 Run status group 0 (all jobs): 00:34:53.233 READ: bw=53.1MiB/s (55.7MB/s), 53.1MiB/s-53.1MiB/s (55.7MB/s-55.7MB/s), io=106MiB (112MB), run=2005-2005msec 00:34:53.233 WRITE: bw=53.0MiB/s (55.6MB/s), 53.0MiB/s-53.0MiB/s (55.6MB/s-55.6MB/s), io=106MiB (112MB), run=2005-2005msec 00:34:53.233 13:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:34:53.233 13:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:34:53.233 13:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:53.233 13:05:23 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:53.233 13:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:53.233 13:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:53.233 13:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:34:53.234 13:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:53.234 13:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:53.234 13:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:53.234 13:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:34:53.234 13:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:53.234 13:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:53.234 13:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:53.234 13:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:53.234 13:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:53.234 13:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:53.234 13:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:53.234 13:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:53.234 13:05:23 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:53.234 13:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:53.234 13:05:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:34:53.498 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:34:53.498 fio-3.35 00:34:53.498 Starting 1 thread 00:34:56.044 00:34:56.044 test: (groupid=0, jobs=1): err= 0: pid=3595543: Thu Nov 28 13:05:25 2024 00:34:56.044 read: IOPS=9589, BW=150MiB/s (157MB/s)(300MiB/2003msec) 00:34:56.044 slat (usec): min=3, max=111, avg= 3.60, stdev= 1.57 00:34:56.044 clat (usec): min=1783, max=15555, avg=8090.95, stdev=1929.73 00:34:56.044 lat (usec): min=1786, max=15558, avg=8094.55, stdev=1929.85 00:34:56.044 clat percentiles (usec): 00:34:56.044 | 1.00th=[ 4113], 5.00th=[ 5211], 10.00th=[ 5735], 20.00th=[ 6325], 00:34:56.044 | 30.00th=[ 6915], 40.00th=[ 7504], 50.00th=[ 7963], 60.00th=[ 8455], 00:34:56.044 | 70.00th=[ 9110], 80.00th=[ 9896], 90.00th=[10814], 95.00th=[11469], 00:34:56.044 | 99.00th=[12387], 99.50th=[13042], 99.90th=[14484], 99.95th=[14877], 00:34:56.044 | 99.99th=[15270] 00:34:56.044 bw ( KiB/s): min=71232, max=84352, per=49.73%, avg=76304.00, stdev=5675.86, samples=4 00:34:56.044 iops : min= 4452, max= 5272, avg=4769.00, stdev=354.74, samples=4 00:34:56.044 write: IOPS=5752, BW=89.9MiB/s (94.3MB/s)(156MiB/1736msec); 0 zone resets 00:34:56.044 slat (usec): min=39, max=369, avg=40.83, stdev= 6.90 00:34:56.044 clat (usec): min=1882, max=14709, avg=9009.09, stdev=1329.94 00:34:56.044 lat (usec): min=1921, max=14815, avg=9049.92, stdev=1331.30 00:34:56.044 clat percentiles 
(usec): 00:34:56.044 | 1.00th=[ 6259], 5.00th=[ 7046], 10.00th=[ 7439], 20.00th=[ 7832], 00:34:56.045 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372], 00:34:56.045 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10552], 95.00th=[11076], 00:34:56.045 | 99.00th=[12256], 99.50th=[12911], 99.90th=[14222], 99.95th=[14484], 00:34:56.045 | 99.99th=[14746] 00:34:56.045 bw ( KiB/s): min=74368, max=88064, per=86.31%, avg=79448.00, stdev=6024.24, samples=4 00:34:56.045 iops : min= 4648, max= 5504, avg=4965.50, stdev=376.52, samples=4 00:34:56.045 lat (msec) : 2=0.03%, 4=0.69%, 10=79.30%, 20=19.98% 00:34:56.045 cpu : usr=85.11%, sys=13.69%, ctx=14, majf=0, minf=55 00:34:56.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:34:56.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:56.045 issued rwts: total=19207,9987,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:56.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:56.045 00:34:56.045 Run status group 0 (all jobs): 00:34:56.045 READ: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=300MiB (315MB), run=2003-2003msec 00:34:56.045 WRITE: bw=89.9MiB/s (94.3MB/s), 89.9MiB/s-89.9MiB/s (94.3MB/s-94.3MB/s), io=156MiB (164MB), run=1736-1736msec 00:34:56.045 13:05:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:56.045 13:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:34:56.045 13:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:34:56.045 13:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:34:56.045 13:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:34:56.045 13:05:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:34:56.045 13:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:56.045 13:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:56.045 13:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:34:56.045 13:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:34:56.045 13:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:34:56.045 13:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:34:56.615 Nvme0n1 00:34:56.615 13:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:34:57.185 13:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=c15c28c0-4ea6-414e-aa45-119006ff5f00 00:34:57.185 13:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb c15c28c0-4ea6-414e-aa45-119006ff5f00 00:34:57.185 13:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=c15c28c0-4ea6-414e-aa45-119006ff5f00 00:34:57.185 13:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:34:57.185 13:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:34:57.185 13:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:34:57.185 13:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:57.444 13:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:34:57.444 { 00:34:57.444 "uuid": "c15c28c0-4ea6-414e-aa45-119006ff5f00", 00:34:57.444 "name": "lvs_0", 00:34:57.444 "base_bdev": "Nvme0n1", 00:34:57.444 "total_data_clusters": 1787, 00:34:57.444 "free_clusters": 1787, 00:34:57.444 "block_size": 512, 00:34:57.444 "cluster_size": 1073741824 00:34:57.444 } 00:34:57.444 ]' 00:34:57.444 13:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="c15c28c0-4ea6-414e-aa45-119006ff5f00") .free_clusters' 00:34:57.444 13:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1787 00:34:57.444 13:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="c15c28c0-4ea6-414e-aa45-119006ff5f00") .cluster_size' 00:34:57.444 13:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:34:57.444 13:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1829888 00:34:57.445 13:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1829888 00:34:57.445 1829888 00:34:57.445 13:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:34:57.704 f2d383a9-971f-4ba1-8bcd-574b06f7e0df 00:34:57.704 13:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:34:57.704 13:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 
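The `get_lvs_free_mb` trace above turns `free_clusters=1787` and `cluster_size=1073741824` into `free_mb=1829888`. The arithmetic is simply free clusters times cluster size, converted from bytes to MiB; a one-function sketch (the helper name is illustrative):

```shell
# free space in MiB = free_clusters * cluster_size (bytes) / 1024 / 1024
lvs_free_mb() {
    local free_clusters=$1 cluster_size=$2
    echo $(( free_clusters * cluster_size / 1024 / 1024 ))
}

lvs_free_mb 1787 1073741824    # lvs_0: 1787 x 1GiB clusters -> 1829888
lvs_free_mb 457025 4194304     # lvs_n_0: 457025 x 4MiB clusters -> 1828100
```

Both results match the sizes the test then passes to `bdev_lvol_create` for `lbd_0` and `lbd_nest_0`.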
lvs_0/lbd_0 00:34:57.965 13:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:58.226 13:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:58.226 13:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:58.226 13:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:58.226 13:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:58.226 13:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:58.226 13:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:58.226 13:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:34:58.226 13:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:58.226 13:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.226 13:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:58.226 13:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:34:58.226 
13:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:58.226 13:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:58.226 13:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:58.227 13:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.227 13:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:58.227 13:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:58.227 13:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:58.227 13:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:58.227 13:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:58.227 13:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:58.227 13:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:58.488 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:58.488 fio-3.35 00:34:58.488 Starting 1 thread 00:35:01.033 00:35:01.033 test: (groupid=0, jobs=1): err= 0: pid=3596740: Thu Nov 28 13:05:30 2024 00:35:01.033 read: IOPS=10.2k, BW=39.8MiB/s (41.8MB/s)(79.9MiB/2006msec) 00:35:01.033 slat (usec): min=2, max=116, avg= 2.26, stdev= 1.14 00:35:01.033 clat (usec): min=2541, max=11779, avg=6923.04, stdev=515.84 00:35:01.033 lat (usec): 
min=2558, max=11781, avg=6925.30, stdev=515.78 00:35:01.033 clat percentiles (usec): 00:35:01.033 | 1.00th=[ 5735], 5.00th=[ 6128], 10.00th=[ 6259], 20.00th=[ 6521], 00:35:01.033 | 30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 7046], 00:35:01.033 | 70.00th=[ 7177], 80.00th=[ 7308], 90.00th=[ 7570], 95.00th=[ 7701], 00:35:01.033 | 99.00th=[ 8094], 99.50th=[ 8225], 99.90th=[ 9372], 99.95th=[10290], 00:35:01.033 | 99.99th=[11076] 00:35:01.033 bw ( KiB/s): min=39712, max=41416, per=99.92%, avg=40760.00, stdev=740.42, samples=4 00:35:01.033 iops : min= 9928, max=10354, avg=10190.00, stdev=185.11, samples=4 00:35:01.033 write: IOPS=10.2k, BW=39.9MiB/s (41.8MB/s)(79.9MiB/2006msec); 0 zone resets 00:35:01.033 slat (nsec): min=2105, max=98182, avg=2328.76, stdev=746.36 00:35:01.033 clat (usec): min=1028, max=10257, avg=5535.23, stdev=443.03 00:35:01.033 lat (usec): min=1035, max=10260, avg=5537.56, stdev=443.01 00:35:01.033 clat percentiles (usec): 00:35:01.033 | 1.00th=[ 4490], 5.00th=[ 4883], 10.00th=[ 5014], 20.00th=[ 5211], 00:35:01.033 | 30.00th=[ 5342], 40.00th=[ 5407], 50.00th=[ 5538], 60.00th=[ 5669], 00:35:01.033 | 70.00th=[ 5735], 80.00th=[ 5866], 90.00th=[ 6063], 95.00th=[ 6194], 00:35:01.033 | 99.00th=[ 6521], 99.50th=[ 6652], 99.90th=[ 7701], 99.95th=[ 9241], 00:35:01.033 | 99.99th=[10159] 00:35:01.033 bw ( KiB/s): min=40336, max=41304, per=100.00%, avg=40836.00, stdev=411.80, samples=4 00:35:01.033 iops : min=10084, max=10326, avg=10209.00, stdev=102.95, samples=4 00:35:01.033 lat (msec) : 2=0.02%, 4=0.11%, 10=99.82%, 20=0.04% 00:35:01.033 cpu : usr=72.02%, sys=26.98%, ctx=46, majf=0, minf=27 00:35:01.033 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:35:01.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:01.033 issued rwts: total=20458,20467,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.033 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:35:01.033 00:35:01.033 Run status group 0 (all jobs): 00:35:01.033 READ: bw=39.8MiB/s (41.8MB/s), 39.8MiB/s-39.8MiB/s (41.8MB/s-41.8MB/s), io=79.9MiB (83.8MB), run=2006-2006msec 00:35:01.033 WRITE: bw=39.9MiB/s (41.8MB/s), 39.9MiB/s-39.9MiB/s (41.8MB/s-41.8MB/s), io=79.9MiB (83.8MB), run=2006-2006msec 00:35:01.033 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:01.294 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:35:01.867 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=e0d5b9a3-bec1-4ae7-b1b4-2b1494dab0a0 00:35:02.128 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb e0d5b9a3-bec1-4ae7-b1b4-2b1494dab0a0 00:35:02.128 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=e0d5b9a3-bec1-4ae7-b1b4-2b1494dab0a0 00:35:02.128 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:35:02.128 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:35:02.128 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:35:02.128 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:02.128 13:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:35:02.128 { 00:35:02.128 "uuid": "c15c28c0-4ea6-414e-aa45-119006ff5f00", 00:35:02.128 "name": "lvs_0", 00:35:02.128 "base_bdev": "Nvme0n1", 00:35:02.128 "total_data_clusters": 1787, 00:35:02.128 "free_clusters": 0, 00:35:02.128 
"block_size": 512, 00:35:02.128 "cluster_size": 1073741824 00:35:02.128 }, 00:35:02.128 { 00:35:02.128 "uuid": "e0d5b9a3-bec1-4ae7-b1b4-2b1494dab0a0", 00:35:02.128 "name": "lvs_n_0", 00:35:02.128 "base_bdev": "f2d383a9-971f-4ba1-8bcd-574b06f7e0df", 00:35:02.128 "total_data_clusters": 457025, 00:35:02.128 "free_clusters": 457025, 00:35:02.128 "block_size": 512, 00:35:02.128 "cluster_size": 4194304 00:35:02.128 } 00:35:02.128 ]' 00:35:02.128 13:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="e0d5b9a3-bec1-4ae7-b1b4-2b1494dab0a0") .free_clusters' 00:35:02.128 13:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=457025 00:35:02.128 13:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="e0d5b9a3-bec1-4ae7-b1b4-2b1494dab0a0") .cluster_size' 00:35:02.389 13:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:35:02.389 13:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1828100 00:35:02.389 13:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1828100 00:35:02.389 1828100 00:35:02.389 13:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:35:02.960 0beefe12-216d-4119-8fff-1c6c40b3f7c3 00:35:02.960 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:35:03.221 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:35:03.483 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:35:03.744 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:35:03.744 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:35:03.744 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:03.744 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:03.744 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:03.744 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:03.744 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:35:03.744 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:03.744 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:03.744 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:03.744 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:35:03.744 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print 
$3}' 00:35:03.744 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:03.744 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:03.744 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:03.744 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:03.744 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:03.744 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:03.744 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:03.744 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:03.744 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:35:03.744 13:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:35:04.012 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:35:04.012 fio-3.35 00:35:04.012 Starting 1 thread 00:35:06.556 00:35:06.556 test: (groupid=0, jobs=1): err= 0: pid=3597922: Thu Nov 28 13:05:36 2024 00:35:06.556 read: IOPS=9098, BW=35.5MiB/s (37.3MB/s)(71.3MiB/2006msec) 00:35:06.556 slat (usec): min=2, max=117, avg= 2.21, stdev= 1.22 00:35:06.556 clat (usec): min=2808, max=12724, avg=7765.32, stdev=606.22 00:35:06.556 lat (usec): min=2825, max=12726, avg=7767.53, stdev=606.16 00:35:06.556 clat percentiles (usec): 
00:35:06.556 | 1.00th=[ 6390], 5.00th=[ 6783], 10.00th=[ 7046], 20.00th=[ 7308], 00:35:06.556 | 30.00th=[ 7439], 40.00th=[ 7635], 50.00th=[ 7767], 60.00th=[ 7898], 00:35:06.556 | 70.00th=[ 8094], 80.00th=[ 8225], 90.00th=[ 8455], 95.00th=[ 8717], 00:35:06.556 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[ 9896], 99.95th=[10945], 00:35:06.556 | 99.99th=[12649] 00:35:06.556 bw ( KiB/s): min=35344, max=37016, per=99.95%, avg=36374.00, stdev=720.31, samples=4 00:35:06.556 iops : min= 8836, max= 9254, avg=9093.50, stdev=180.08, samples=4 00:35:06.556 write: IOPS=9113, BW=35.6MiB/s (37.3MB/s)(71.4MiB/2006msec); 0 zone resets 00:35:06.556 slat (nsec): min=2105, max=108399, avg=2280.64, stdev=832.02 00:35:06.556 clat (usec): min=1061, max=11472, avg=6207.77, stdev=521.75 00:35:06.556 lat (usec): min=1069, max=11475, avg=6210.05, stdev=521.72 00:35:06.556 clat percentiles (usec): 00:35:06.556 | 1.00th=[ 5014], 5.00th=[ 5407], 10.00th=[ 5604], 20.00th=[ 5800], 00:35:06.556 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6325], 00:35:06.556 | 70.00th=[ 6456], 80.00th=[ 6587], 90.00th=[ 6849], 95.00th=[ 6980], 00:35:06.556 | 99.00th=[ 7373], 99.50th=[ 7504], 99.90th=[ 9634], 99.95th=[10683], 00:35:06.556 | 99.99th=[11338] 00:35:06.556 bw ( KiB/s): min=36064, max=36672, per=99.95%, avg=36436.00, stdev=273.84, samples=4 00:35:06.556 iops : min= 9016, max= 9168, avg=9109.00, stdev=68.46, samples=4 00:35:06.556 lat (msec) : 2=0.01%, 4=0.08%, 10=99.84%, 20=0.07% 00:35:06.556 cpu : usr=72.42%, sys=26.68%, ctx=58, majf=0, minf=27 00:35:06.556 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:35:06.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:06.556 issued rwts: total=18251,18281,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.556 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:06.556 00:35:06.556 Run status 
group 0 (all jobs): 00:35:06.556 READ: bw=35.5MiB/s (37.3MB/s), 35.5MiB/s-35.5MiB/s (37.3MB/s-37.3MB/s), io=71.3MiB (74.8MB), run=2006-2006msec 00:35:06.556 WRITE: bw=35.6MiB/s (37.3MB/s), 35.6MiB/s-35.6MiB/s (37.3MB/s-37.3MB/s), io=71.4MiB (74.9MB), run=2006-2006msec 00:35:06.556 13:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:35:06.817 13:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:35:06.817 13:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:35:08.729 13:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:35:08.729 13:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:35:09.300 13:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:35:09.562 13:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:35:11.472 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:11.472 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:35:11.472 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:35:11.472 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:11.472 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:35:11.731 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:11.731 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:35:11.731 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:11.731 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:11.731 rmmod nvme_tcp 00:35:11.731 rmmod nvme_fabrics 00:35:11.731 rmmod nvme_keyring 00:35:11.731 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:11.731 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:35:11.731 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:35:11.731 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3594185 ']' 00:35:11.731 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3594185 00:35:11.731 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3594185 ']' 00:35:11.731 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3594185 00:35:11.731 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:35:11.731 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:11.731 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3594185 00:35:11.731 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:11.731 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:11.731 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3594185' 00:35:11.731 killing process with pid 3594185 00:35:11.731 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 
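The teardown trace above exercises the `killprocess` helper: it checks the pid is non-empty, verifies the process is alive with `kill -0`, resolves the process name via `ps --no-headers -o comm=`, and refuses to signal `sudo` before killing. A condensed sketch under those same steps (behavioral details beyond what the trace shows are assumptions):

```shell
killprocess() {
    local pid=$1 name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1   # is the process still alive?
    name=$(ps --no-headers -o comm= "$pid")
    if [ "$name" = "sudo" ]; then            # never signal sudo itself
        return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                  # reap it if it is our child
    return 0
}
```

In the log the target is the nvmf reactor (`process_name=reactor_0`, pid 3594185), so the sudo guard passes and the process is killed and then waited on.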
3594185 00:35:11.731 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3594185 00:35:11.731 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:11.991 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:11.991 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:11.991 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:35:11.991 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:35:11.991 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:35:11.991 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:11.991 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:11.991 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:11.991 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:11.991 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:11.991 13:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.909 13:05:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:13.909 00:35:13.909 real 0m33.677s 00:35:13.909 user 2m32.426s 00:35:13.909 sys 0m10.270s 00:35:13.909 13:05:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:13.909 13:05:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.909 ************************************ 00:35:13.909 END TEST nvmf_fio_host 00:35:13.909 ************************************ 00:35:13.909 13:05:43 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:35:13.909 13:05:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:13.909 13:05:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:13.909 13:05:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.909 ************************************ 00:35:13.909 START TEST nvmf_failover 00:35:13.909 ************************************ 00:35:13.909 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:35:14.171 * Looking for test storage... 00:35:14.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 
-- # read -ra ver2 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:14.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.171 --rc genhtml_branch_coverage=1 00:35:14.171 --rc genhtml_function_coverage=1 00:35:14.171 --rc genhtml_legend=1 00:35:14.171 --rc geninfo_all_blocks=1 00:35:14.171 --rc geninfo_unexecuted_blocks=1 00:35:14.171 00:35:14.171 ' 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:14.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.171 --rc genhtml_branch_coverage=1 00:35:14.171 --rc genhtml_function_coverage=1 00:35:14.171 --rc genhtml_legend=1 00:35:14.171 --rc geninfo_all_blocks=1 00:35:14.171 --rc geninfo_unexecuted_blocks=1 00:35:14.171 00:35:14.171 ' 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:14.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.171 --rc genhtml_branch_coverage=1 00:35:14.171 --rc genhtml_function_coverage=1 00:35:14.171 --rc genhtml_legend=1 00:35:14.171 --rc geninfo_all_blocks=1 00:35:14.171 --rc geninfo_unexecuted_blocks=1 00:35:14.171 00:35:14.171 ' 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:14.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.171 --rc genhtml_branch_coverage=1 00:35:14.171 --rc genhtml_function_coverage=1 00:35:14.171 --rc genhtml_legend=1 00:35:14.171 --rc geninfo_all_blocks=1 00:35:14.171 --rc geninfo_unexecuted_blocks=1 00:35:14.171 00:35:14.171 ' 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover 
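The `lt 1.15 2` trace above (from scripts/common.sh) decides whether the installed lcov predates 1.15 coverage options by splitting both version strings on dots and comparing the numeric fields pairwise, padding the shorter version with zeros. A minimal sketch of that comparison (the function name is illustrative; the real helper also handles `-` and `:` separators):

```shell
# Return 0 if $1 < $2 when compared as dot-separated numeric fields.
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        # missing fields count as 0, and fields compare numerically,
        # so 1.9 < 1.15 (unlike a string comparison)
        if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
        if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
    done
    return 1    # equal is not "less than"
}
```

In the run above the comparison confirms lcov 1.x is older than 2, so the script selects the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` option spelling seen in the subsequent `LCOV_OPTS` lines.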
-- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:14.171 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.172 13:05:44 
nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:14.172 13:05:44 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:14.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:14.172 13:05:44 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:35:14.172 13:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:35:22.334 13:05:51 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:22.334 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:22.334 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:22.334 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:35:22.334 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # 
ip -4 addr flush cvl_0_0 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:22.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:22.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:35:22.334 00:35:22.334 --- 10.0.0.2 ping statistics --- 00:35:22.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:22.334 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:22.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:22.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:35:22.334 00:35:22.334 --- 10.0.0.1 ping statistics --- 00:35:22.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:22.334 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:22.334 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:22.335 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:22.335 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:22.335 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:22.335 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:22.335 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:35:22.335 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:22.335 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:22.335 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:22.335 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3603554 00:35:22.335 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3603554 00:35:22.335 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:22.335 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3603554 ']' 00:35:22.335 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:22.335 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:22.335 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:22.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:22.335 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:22.335 13:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:22.335 [2024-11-28 13:05:51.861948] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:35:22.335 [2024-11-28 13:05:51.862022] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:22.335 [2024-11-28 13:05:52.006812] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:22.335 [2024-11-28 13:05:52.064908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:22.335 [2024-11-28 13:05:52.091934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:22.335 [2024-11-28 13:05:52.091976] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:22.335 [2024-11-28 13:05:52.091985] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:22.335 [2024-11-28 13:05:52.091992] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:22.335 [2024-11-28 13:05:52.091998] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:22.335 [2024-11-28 13:05:52.093707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:22.335 [2024-11-28 13:05:52.093868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:22.335 [2024-11-28 13:05:52.093870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:22.595 13:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:22.595 13:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:35:22.595 13:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:22.595 13:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:22.595 13:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:22.856 13:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:22.856 13:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:22.856 [2024-11-28 13:05:52.901164] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:22.856 13:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:35:23.116 Malloc0 00:35:23.116 13:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:23.377 13:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:23.638 13:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:23.638 [2024-11-28 13:05:53.712601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:23.638 13:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:23.899 [2024-11-28 13:05:53.912727] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:23.899 13:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:35:24.160 [2024-11-28 13:05:54.100975] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:35:24.160 13:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:35:24.160 13:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3603950 00:35:24.160 13:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM 
EXIT 00:35:24.160 13:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3603950 /var/tmp/bdevperf.sock 00:35:24.160 13:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3603950 ']' 00:35:24.160 13:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:24.160 13:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:24.160 13:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:24.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:24.160 13:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:24.160 13:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:25.181 13:05:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:25.181 13:05:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:35:25.181 13:05:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:35:25.181 NVMe0n1 00:35:25.442 13:05:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:35:25.702 00:35:25.702 13:05:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3604285 00:35:25.702 13:05:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:25.702 13:05:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:35:26.643 13:05:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:26.904 [2024-11-28 13:05:56.819855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.904 [2024-11-28 13:05:56.819891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.904 [2024-11-28 13:05:56.819898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.904 [2024-11-28 13:05:56.819909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.904 [2024-11-28 13:05:56.819913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.904 [2024-11-28 13:05:56.819918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.904 [2024-11-28 13:05:56.819923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.904 [2024-11-28 13:05:56.819928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.904 [2024-11-28 13:05:56.819933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.904 [2024-11-28 13:05:56.819937] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820103] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820162] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 [2024-11-28 13:05:56.820180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278780 is same with the state(6) to be set 00:35:26.905 13:05:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:35:30.209 13:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:35:30.209 00:35:30.209 13:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:30.470 [2024-11-28 13:06:00.349871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.470 [2024-11-28 13:06:00.349906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.470 [2024-11-28 13:06:00.349912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.470 [2024-11-28 13:06:00.349917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.470 [2024-11-28 13:06:00.349922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.470 [2024-11-28 13:06:00.349927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.470 [2024-11-28 13:06:00.349932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.470 [2024-11-28 13:06:00.349936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.470 [2024-11-28 13:06:00.349941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.470 [2024-11-28 13:06:00.349945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.349950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.349954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.349959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.349964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.349968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.349973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is 
same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.349977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.349981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.349992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.349996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be 
set 00:35:30.471 [2024-11-28 13:06:00.350037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 
13:06:00.350091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350151] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350211] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350272] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350326] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.471 [2024-11-28 13:06:00.350341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.472 [2024-11-28 13:06:00.350345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.472 [2024-11-28 13:06:00.350350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.472 [2024-11-28 13:06:00.350354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.472 [2024-11-28 13:06:00.350359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.472 [2024-11-28 13:06:00.350364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.472 [2024-11-28 13:06:00.350368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.472 [2024-11-28 13:06:00.350373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.472 [2024-11-28 13:06:00.350378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12794a0 is same with the state(6) to be set 00:35:30.472 13:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@50 -- # sleep 3 00:35:33.799 13:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:33.800 [2024-11-28 13:06:03.544806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:33.800 13:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:35:34.744 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:35:34.744 [2024-11-28 13:06:04.736690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 
is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be 
set 00:35:34.744 [2024-11-28 13:06:04.736818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.744 [2024-11-28 13:06:04.736856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 
13:06:04.736879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736934] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c56c0 is same with the state(6) to be set 00:35:34.745 [2024-11-28 13:06:04.736990] 
00:35:34.745 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3604285 00:35:41.339 { 00:35:41.339 "results": [ 00:35:41.339 { 00:35:41.339 "job": "NVMe0n1", 00:35:41.339 "core_mask": "0x1", 00:35:41.339 "workload": "verify", 00:35:41.339 "status": "finished", 00:35:41.339 "verify_range": { 00:35:41.339 "start": 0, 00:35:41.339 "length": 16384 00:35:41.339 }, 00:35:41.339 "queue_depth": 128, 00:35:41.339 "io_size": 4096, 00:35:41.339 "runtime": 15.006753, 00:35:41.339 "iops": 12232.026475014281, 00:35:41.339 "mibps": 47.78135341802454, 00:35:41.339 "io_failed": 17348, 00:35:41.339 "io_timeout": 0, 00:35:41.339 "avg_latency_us": 9539.576924669109, 00:35:41.339 "min_latency_us": 557.674574006014, 00:35:41.339 "max_latency_us": 19268.853992649514 00:35:41.339 } 00:35:41.339 ], 00:35:41.339 "core_count": 1 00:35:41.339 } 00:35:41.339 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3603950 00:35:41.339 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3603950 ']' 00:35:41.339 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3603950 00:35:41.339 13:06:10 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:35:41.339 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:41.339 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3603950 00:35:41.339 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:41.339 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:41.339 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3603950' 00:35:41.339 killing process with pid 3603950 00:35:41.339 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3603950 00:35:41.339 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3603950 00:35:41.339 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:41.339 [2024-11-28 13:05:54.164917] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:35:41.339 [2024-11-28 13:05:54.164987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603950 ] 00:35:41.339 [2024-11-28 13:05:54.293740] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:41.339 [2024-11-28 13:05:54.352370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:41.339 [2024-11-28 13:05:54.380224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:41.339 Running I/O for 15 seconds... 
00:35:41.339 11828.00 IOPS, 46.20 MiB/s [2024-11-28T12:06:11.466Z] [2024-11-28 13:05:56.820875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:41.339 [2024-11-28 13:05:56.820908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.339 [2024-11-28 13:05:56.820918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:41.339 [2024-11-28 13:05:56.820926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.339 [2024-11-28 13:05:56.820935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:41.339 [2024-11-28 13:05:56.820942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.339 [2024-11-28 13:05:56.820951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:41.339 [2024-11-28 13:05:56.820958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.339 [2024-11-28 13:05:56.820966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20edb30 is same with the state(6) to be set 00:35:41.339 [2024-11-28 13:05:56.821020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:101144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.339 [2024-11-28 13:05:56.821031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.339 [2024-11-28 
13:05:56.821044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:101152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.339 [2024-11-28 13:05:56.821052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.339 [2024-11-28 13:05:56.821061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.339 [2024-11-28 13:05:56.821069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.339 [2024-11-28 13:05:56.821079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:101168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.339 [2024-11-28 13:05:56.821086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.339 [2024-11-28 13:05:56.821095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:101184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:101200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:101216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:101224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:101232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 
nsid:1 lba:101240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:101248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:101256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:101264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:41.340 [2024-11-28 13:05:56.821342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:101288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:101304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:101312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:101320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821436] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:101336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:101344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:101360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:101368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 
lba:101376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:101392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:101416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 
[2024-11-28 13:05:56.821633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:101424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:101440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:101448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:101456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821724] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.340 [2024-11-28 13:05:56.821767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.340 [2024-11-28 13:05:56.821776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.821785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.821793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.821802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.821809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.821819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 
nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.821826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.821835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.821842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.821852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.821859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.821868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.821876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.821885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.821892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.821902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:101552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.821909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:41.341 [2024-11-28 13:05:56.821918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.821925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.821935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.821942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.821952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.821959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.821968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.821975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.821989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:101592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.821997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.822006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.822013] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.822022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.822030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.822039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:101616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.822047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.822056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:101624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.822064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.822073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.822081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.822090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:101640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.822097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.822106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 
nsid:1 lba:101648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.822114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.822124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:101656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.822131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.822141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.822149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.822164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:101672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.822172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.822181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:101680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.822189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.341 [2024-11-28 13:05:56.822198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:101688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.341 [2024-11-28 13:05:56.822207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:41.341 [2024-11-28 13:05:56.822217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:101696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.341 [2024-11-28 13:05:56.822224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeat for READ lba:101704 through lba:101984 and WRITE lba:101992 through lba:102152; every completion is ABORTED - SQ DELETION (00/08) qid:1 ...]
00:35:41.343 [2024-11-28 13:05:56.823215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:35:41.343 [2024-11-28 13:05:56.823222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:35:41.343 [2024-11-28 13:05:56.823229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102160 len:8 PRP1 0x0 PRP2 0x0
00:35:41.343 [2024-11-28 13:05:56.823236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:41.343 [2024-11-28 13:05:56.823277] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:35:41.343 [2024-11-28 13:05:56.823287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:35:41.343 [2024-11-28 13:05:56.826876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:35:41.343 [2024-11-28 13:05:56.826899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20edb30 (9): Bad file descriptor
00:35:41.343 [2024-11-28 13:05:56.850106] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
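The dump above is hundreds of near-identical `nvme_io_qpair_print_command` entries for I/O aborted when the submission queue was deleted during failover. A small sketch of a log-summarizing helper (hypothetical, not part of SPDK; the regex is derived from the entry format visible in this log) can condense such a dump into counts and an LBA range:

```python
import re

# Matches the nvme_io_qpair_print_command entries seen in this log.
# Hypothetical convenience parser; not an SPDK tool.
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>READ|WRITE) "
    r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:(?P<nsid>\d+) "
    r"lba:(?P<lba>\d+) len:(?P<len>\d+)"
)

def summarize_aborts(log_text):
    """Count aborted READ/WRITE commands and report the LBA range covered."""
    ops = {"READ": 0, "WRITE": 0}
    lbas = []
    for m in CMD_RE.finditer(log_text):
        ops[m.group("op")] += 1
        lbas.append(int(m.group("lba")))
    return {
        "reads": ops["READ"],
        "writes": ops["WRITE"],
        "lba_min": min(lbas) if lbas else None,
        "lba_max": max(lbas) if lbas else None,
    }
```

Feeding the block above through such a helper would show one command/completion pair per 8-block LBA stride, all completed with the same ABORTED - SQ DELETION (00/08) status.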
00:35:41.343 11416.00 IOPS, 44.59 MiB/s [2024-11-28T12:06:11.470Z] 11270.33 IOPS, 44.02 MiB/s [2024-11-28T12:06:11.470Z] 11651.25 IOPS, 45.51 MiB/s [2024-11-28T12:06:11.470Z]
00:35:41.343 [2024-11-28 13:06:00.351934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.343 [2024-11-28 13:06:00.351972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeat for READ lba:49392 through lba:49616 and WRITE lba:49624 through lba:49776; every completion is ABORTED - SQ DELETION (00/08) qid:1 ...]
00:35:41.344 [2024-11-28 13:06:00.352569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:41.344 [2024-11-28 13:06:00.352573] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.344 [2024-11-28 13:06:00.352581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.344 [2024-11-28 13:06:00.352587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.344 [2024-11-28 13:06:00.352593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.344 [2024-11-28 13:06:00.352598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.344 [2024-11-28 13:06:00.352604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.344 [2024-11-28 13:06:00.352609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.344 [2024-11-28 13:06:00.352616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.344 [2024-11-28 13:06:00.352621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.344 [2024-11-28 13:06:00.352627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.344 [2024-11-28 13:06:00.352632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.344 [2024-11-28 13:06:00.352638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.344 [2024-11-28 13:06:00.352643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.344 [2024-11-28 13:06:00.352650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.344 [2024-11-28 13:06:00.352655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.344 [2024-11-28 13:06:00.352661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.344 [2024-11-28 13:06:00.352666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.344 [2024-11-28 13:06:00.352672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.344 [2024-11-28 13:06:00.352678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.344 [2024-11-28 13:06:00.352684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:49864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.344 [2024-11-28 13:06:00.352689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.344 [2024-11-28 13:06:00.352695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.344 [2024-11-28 13:06:00.352700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.344 [2024-11-28 
13:06:00.352706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:49880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.344 [2024-11-28 13:06:00.352711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.344 [2024-11-28 13:06:00.352717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.344 [2024-11-28 13:06:00.352724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.344 [2024-11-28 13:06:00.352730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.344 [2024-11-28 13:06:00.352735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.344 [2024-11-28 13:06:00.352742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.345 [2024-11-28 13:06:00.352747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.352753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.345 [2024-11-28 13:06:00.352758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.352765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.345 [2024-11-28 13:06:00.352770] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.352777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.345 [2024-11-28 13:06:00.352782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.352789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.345 [2024-11-28 13:06:00.352794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.352800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.345 [2024-11-28 13:06:00.352805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.352812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.345 [2024-11-28 13:06:00.352817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.352823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.345 [2024-11-28 13:06:00.352828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.352835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49968 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:35:41.345 [2024-11-28 13:06:00.352840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.352847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.345 [2024-11-28 13:06:00.352851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.352858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.345 [2024-11-28 13:06:00.352863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.352871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.345 [2024-11-28 13:06:00.352876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.352883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:50000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.345 [2024-11-28 13:06:00.352888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.352894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.345 [2024-11-28 13:06:00.352899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.352905] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:50016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.345 [2024-11-28 13:06:00.352911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.352917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.345 [2024-11-28 13:06:00.352922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.352929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:50032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.345 [2024-11-28 13:06:00.352934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.352940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:50040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.345 [2024-11-28 13:06:00.352945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.352951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:50048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.345 [2024-11-28 13:06:00.352956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.352962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.345 [2024-11-28 13:06:00.352967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.352974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:50064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.345 [2024-11-28 13:06:00.352979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.352985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.345 [2024-11-28 13:06:00.352990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.353006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.345 [2024-11-28 13:06:00.353011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50080 len:8 PRP1 0x0 PRP2 0x0 00:35:41.345 [2024-11-28 13:06:00.353017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.353025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.345 [2024-11-28 13:06:00.353031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.345 [2024-11-28 13:06:00.353035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50088 len:8 PRP1 0x0 PRP2 0x0 00:35:41.345 [2024-11-28 13:06:00.353041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.353046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.345 [2024-11-28 13:06:00.353050] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.345 [2024-11-28 13:06:00.353054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50096 len:8 PRP1 0x0 PRP2 0x0 00:35:41.345 [2024-11-28 13:06:00.353060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.353065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.345 [2024-11-28 13:06:00.353069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.345 [2024-11-28 13:06:00.353073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50104 len:8 PRP1 0x0 PRP2 0x0 00:35:41.345 [2024-11-28 13:06:00.353078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.353083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.345 [2024-11-28 13:06:00.353087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.345 [2024-11-28 13:06:00.353091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50112 len:8 PRP1 0x0 PRP2 0x0 00:35:41.345 [2024-11-28 13:06:00.353096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.353102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.345 [2024-11-28 13:06:00.353106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.345 [2024-11-28 13:06:00.353110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50120 len:8 PRP1 0x0 PRP2 0x0 00:35:41.345 
[2024-11-28 13:06:00.353115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.353120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.345 [2024-11-28 13:06:00.353124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.345 [2024-11-28 13:06:00.353129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50128 len:8 PRP1 0x0 PRP2 0x0 00:35:41.345 [2024-11-28 13:06:00.353134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.353139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.345 [2024-11-28 13:06:00.353143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.345 [2024-11-28 13:06:00.353147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50136 len:8 PRP1 0x0 PRP2 0x0 00:35:41.345 [2024-11-28 13:06:00.353152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.353166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.345 [2024-11-28 13:06:00.353171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.345 [2024-11-28 13:06:00.353175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50144 len:8 PRP1 0x0 PRP2 0x0 00:35:41.345 [2024-11-28 13:06:00.353180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.353187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:35:41.345 [2024-11-28 13:06:00.353191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.345 [2024-11-28 13:06:00.353195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50152 len:8 PRP1 0x0 PRP2 0x0 00:35:41.345 [2024-11-28 13:06:00.353200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.353205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.345 [2024-11-28 13:06:00.353209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.345 [2024-11-28 13:06:00.353213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50160 len:8 PRP1 0x0 PRP2 0x0 00:35:41.345 [2024-11-28 13:06:00.353218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.345 [2024-11-28 13:06:00.353224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 [2024-11-28 13:06:00.353228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.353233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50168 len:8 PRP1 0x0 PRP2 0x0 00:35:41.346 [2024-11-28 13:06:00.353238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.346 [2024-11-28 13:06:00.353243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 [2024-11-28 13:06:00.353247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.353251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50176 len:8 PRP1 0x0 PRP2 0x0 00:35:41.346 [2024-11-28 13:06:00.353257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.346 [2024-11-28 13:06:00.353262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 [2024-11-28 13:06:00.353266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.353270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50184 len:8 PRP1 0x0 PRP2 0x0 00:35:41.346 [2024-11-28 13:06:00.353275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.346 [2024-11-28 13:06:00.353281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 [2024-11-28 13:06:00.353284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.353289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50192 len:8 PRP1 0x0 PRP2 0x0 00:35:41.346 [2024-11-28 13:06:00.353294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.346 [2024-11-28 13:06:00.353299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 [2024-11-28 13:06:00.353303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.353307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50200 len:8 PRP1 0x0 PRP2 0x0 00:35:41.346 [2024-11-28 13:06:00.353312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:35:41.346 [2024-11-28 13:06:00.353317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 [2024-11-28 13:06:00.353321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.353325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50208 len:8 PRP1 0x0 PRP2 0x0 00:35:41.346 [2024-11-28 13:06:00.353331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.346 [2024-11-28 13:06:00.353337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 [2024-11-28 13:06:00.353341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.353345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50216 len:8 PRP1 0x0 PRP2 0x0 00:35:41.346 [2024-11-28 13:06:00.353350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.346 [2024-11-28 13:06:00.353355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 [2024-11-28 13:06:00.353359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.353363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50224 len:8 PRP1 0x0 PRP2 0x0 00:35:41.346 [2024-11-28 13:06:00.353368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.346 [2024-11-28 13:06:00.353373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 [2024-11-28 13:06:00.353377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.353381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50232 len:8 PRP1 0x0 PRP2 0x0 00:35:41.346 [2024-11-28 13:06:00.353386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.346 [2024-11-28 13:06:00.353392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 [2024-11-28 13:06:00.353395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.353399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50240 len:8 PRP1 0x0 PRP2 0x0 00:35:41.346 [2024-11-28 13:06:00.353404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.346 [2024-11-28 13:06:00.353410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 [2024-11-28 13:06:00.353414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.353418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50248 len:8 PRP1 0x0 PRP2 0x0 00:35:41.346 [2024-11-28 13:06:00.353423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.346 [2024-11-28 13:06:00.353429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 [2024-11-28 13:06:00.353433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.353437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50256 len:8 PRP1 0x0 PRP2 0x0 00:35:41.346 [2024-11-28 13:06:00.353442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.346 [2024-11-28 13:06:00.353448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 [2024-11-28 13:06:00.353452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.353456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50264 len:8 PRP1 0x0 PRP2 0x0 00:35:41.346 [2024-11-28 13:06:00.353461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.346 [2024-11-28 13:06:00.353467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 [2024-11-28 13:06:00.353470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.353476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50272 len:8 PRP1 0x0 PRP2 0x0 00:35:41.346 [2024-11-28 13:06:00.353481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.346 [2024-11-28 13:06:00.353486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 [2024-11-28 13:06:00.353490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.353494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50280 len:8 PRP1 0x0 PRP2 0x0 00:35:41.346 [2024-11-28 13:06:00.353499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.346 [2024-11-28 13:06:00.353505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 
[2024-11-28 13:06:00.353508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.365475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50288 len:8 PRP1 0x0 PRP2 0x0 00:35:41.346 [2024-11-28 13:06:00.365504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.346 [2024-11-28 13:06:00.365518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 [2024-11-28 13:06:00.365525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.365531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50296 len:8 PRP1 0x0 PRP2 0x0 00:35:41.346 [2024-11-28 13:06:00.365538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.346 [2024-11-28 13:06:00.365546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 [2024-11-28 13:06:00.365551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.365556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50304 len:8 PRP1 0x0 PRP2 0x0 00:35:41.346 [2024-11-28 13:06:00.365565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.346 [2024-11-28 13:06:00.365572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 [2024-11-28 13:06:00.365577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.365583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:50312 len:8 PRP1 0x0 PRP2 0x0 00:35:41.346 [2024-11-28 13:06:00.365589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.346 [2024-11-28 13:06:00.365596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 [2024-11-28 13:06:00.365602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.365607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50320 len:8 PRP1 0x0 PRP2 0x0 00:35:41.346 [2024-11-28 13:06:00.365615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.346 [2024-11-28 13:06:00.365622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 [2024-11-28 13:06:00.365628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.365633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50328 len:8 PRP1 0x0 PRP2 0x0 00:35:41.346 [2024-11-28 13:06:00.365640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.346 [2024-11-28 13:06:00.365651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 [2024-11-28 13:06:00.365657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.365662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50336 len:8 PRP1 0x0 PRP2 0x0 00:35:41.346 [2024-11-28 13:06:00.365669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.346 [2024-11-28 13:06:00.365676] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 [2024-11-28 13:06:00.365681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.365687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50344 len:8 PRP1 0x0 PRP2 0x0 00:35:41.346 [2024-11-28 13:06:00.365694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.346 [2024-11-28 13:06:00.365701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.346 [2024-11-28 13:06:00.365706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.346 [2024-11-28 13:06:00.365712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50352 len:8 PRP1 0x0 PRP2 0x0 00:35:41.347 [2024-11-28 13:06:00.365718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:00.365726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.347 [2024-11-28 13:06:00.365731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.347 [2024-11-28 13:06:00.365736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50360 len:8 PRP1 0x0 PRP2 0x0 00:35:41.347 [2024-11-28 13:06:00.365743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:00.365750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.347 [2024-11-28 13:06:00.365755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.347 [2024-11-28 
13:06:00.365761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50368 len:8 PRP1 0x0 PRP2 0x0 00:35:41.347 [2024-11-28 13:06:00.365767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:00.365775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.347 [2024-11-28 13:06:00.365780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.347 [2024-11-28 13:06:00.365786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50376 len:8 PRP1 0x0 PRP2 0x0 00:35:41.347 [2024-11-28 13:06:00.365792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:00.365800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.347 [2024-11-28 13:06:00.365805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.347 [2024-11-28 13:06:00.365810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50384 len:8 PRP1 0x0 PRP2 0x0 00:35:41.347 [2024-11-28 13:06:00.365817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:00.365824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.347 [2024-11-28 13:06:00.365829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.347 [2024-11-28 13:06:00.365835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50392 len:8 PRP1 0x0 PRP2 0x0 00:35:41.347 [2024-11-28 13:06:00.365843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:00.365850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.347 [2024-11-28 13:06:00.365855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.347 [2024-11-28 13:06:00.365861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50400 len:8 PRP1 0x0 PRP2 0x0 00:35:41.347 [2024-11-28 13:06:00.365868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:00.365909] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:35:41.347 [2024-11-28 13:06:00.365938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:41.347 [2024-11-28 13:06:00.365946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:00.365955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:41.347 [2024-11-28 13:06:00.365962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:00.365969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:41.347 [2024-11-28 13:06:00.365976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:00.365983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:35:41.347 [2024-11-28 13:06:00.365990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:00.365997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:35:41.347 [2024-11-28 13:06:00.366037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20edb30 (9): Bad file descriptor 00:35:41.347 [2024-11-28 13:06:00.369305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:35:41.347 [2024-11-28 13:06:00.552344] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:35:41.347 11345.80 IOPS, 44.32 MiB/s [2024-11-28T12:06:11.474Z] 11548.33 IOPS, 45.11 MiB/s [2024-11-28T12:06:11.474Z] 11726.00 IOPS, 45.80 MiB/s [2024-11-28T12:06:11.474Z] 11851.50 IOPS, 46.29 MiB/s [2024-11-28T12:06:11.474Z] [2024-11-28 13:06:04.737678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:33800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.347 [2024-11-28 13:06:04.737707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:04.737720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.347 [2024-11-28 13:06:04.737725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:04.737733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:33816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.347 [2024-11-28 13:06:04.737738] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:04.737745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.347 [2024-11-28 13:06:04.737750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:04.737760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.347 [2024-11-28 13:06:04.737766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:04.737772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:33840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.347 [2024-11-28 13:06:04.737777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:04.737784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:33848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.347 [2024-11-28 13:06:04.737789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:04.737795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:33856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.347 [2024-11-28 13:06:04.737800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:04.737807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 
lba:33864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.347 [2024-11-28 13:06:04.737812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:04.737818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.347 [2024-11-28 13:06:04.737823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:04.737830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:33880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.347 [2024-11-28 13:06:04.737835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:04.737841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:33888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.347 [2024-11-28 13:06:04.737846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:04.737852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.347 [2024-11-28 13:06:04.737857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:04.737864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.347 [2024-11-28 13:06:04.737869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 
[2024-11-28 13:06:04.737876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.347 [2024-11-28 13:06:04.737881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:04.737887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.347 [2024-11-28 13:06:04.737892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:04.737899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.347 [2024-11-28 13:06:04.737905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:04.737911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.347 [2024-11-28 13:06:04.737917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:04.737923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:33944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.347 [2024-11-28 13:06:04.737928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:04.737935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:33952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.347 [2024-11-28 13:06:04.737940] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:04.737946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:33960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.347 [2024-11-28 13:06:04.737951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:04.737957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:33968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.347 [2024-11-28 13:06:04.737962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.347 [2024-11-28 13:06:04.737969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.737974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.737980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:33984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.737985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.737992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:33992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.737997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 
lba:34000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:34008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:34016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:34032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:34040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 
[2024-11-28 13:06:04.738077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:34056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:34072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:34080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:34096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:34112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:34120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:34128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 
lba:34136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:34144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:34152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:34160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:34168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 
[2024-11-28 13:06:04.738278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:34192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:34216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:34240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.348 [2024-11-28 13:06:04.738370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.348 [2024-11-28 13:06:04.738375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.349 [2024-11-28 13:06:04.738382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.349 [2024-11-28 13:06:04.738387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.349 [2024-11-28 13:06:04.738394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.349 [2024-11-28 13:06:04.738399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.349 [2024-11-28 13:06:04.738405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 
lba:34336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.349 [2024-11-28 13:06:04.738410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.349 [2024-11-28 13:06:04.738416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.349 [2024-11-28 13:06:04.738421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.349 [2024-11-28 13:06:04.738428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:34352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.349 [2024-11-28 13:06:04.738433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.349 [2024-11-28 13:06:04.738439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.349 [2024-11-28 13:06:04.738445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.349 [2024-11-28 13:06:04.738451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.349 [2024-11-28 13:06:04.738456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.349 [2024-11-28 13:06:04.738463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.349 [2024-11-28 13:06:04.738468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.349 [2024-11-28 
13:06:04.738475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:34384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.349 [2024-11-28 13:06:04.738480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.349 [... repeated identical command/completion pairs elided: WRITE lba:34392-34808 and READ lba:34256-34312, each aborted with "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0" ...] 00:35:41.350 [2024-11-28 13:06:04.739199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.350 [2024-11-28 13:06:04.739204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:35:41.350 [2024-11-28 13:06:04.739208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34816 len:8 PRP1 0x0 PRP2 0x0 00:35:41.350 [2024-11-28 13:06:04.739214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.350 [2024-11-28 13:06:04.739252] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:35:41.350 [2024-11-28 13:06:04.739268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:41.350 [2024-11-28 13:06:04.739275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.350 [2024-11-28 13:06:04.739282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:41.350 [2024-11-28 13:06:04.739287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.350 [2024-11-28 13:06:04.739293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:41.350 [2024-11-28 13:06:04.739298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.350 [2024-11-28 13:06:04.739304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:41.350 [2024-11-28 13:06:04.739309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.350 [2024-11-28 13:06:04.739315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:35:41.350 [2024-11-28 13:06:04.741766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:35:41.350 [2024-11-28 13:06:04.741786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20edb30 (9): Bad file descriptor 00:35:41.350 12002.11 IOPS, 46.88 MiB/s [2024-11-28T12:06:11.477Z] [2024-11-28 13:06:04.858880] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:35:41.350 11945.90 IOPS, 46.66 MiB/s [2024-11-28T12:06:11.477Z] 12039.82 IOPS, 47.03 MiB/s [2024-11-28T12:06:11.477Z] 12093.50 IOPS, 47.24 MiB/s [2024-11-28T12:06:11.477Z] 12153.23 IOPS, 47.47 MiB/s [2024-11-28T12:06:11.477Z] 12190.00 IOPS, 47.62 MiB/s [2024-11-28T12:06:11.477Z] 12230.93 IOPS, 47.78 MiB/s 00:35:41.350 Latency(us) 00:35:41.350 [2024-11-28T12:06:11.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:41.350 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:41.350 Verification LBA range: start 0x0 length 0x4000 00:35:41.351 NVMe0n1 : 15.01 12232.03 47.78 1156.01 0.00 9539.58 557.67 19268.85 00:35:41.351 [2024-11-28T12:06:11.478Z] =================================================================================================================== 00:35:41.351 [2024-11-28T12:06:11.478Z] Total : 12232.03 47.78 1156.01 0.00 9539.58 557.67 19268.85 00:35:41.351 Received shutdown signal, test time was about 15.000000 seconds 00:35:41.351 00:35:41.351 Latency(us) 00:35:41.351 [2024-11-28T12:06:11.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:41.351 [2024-11-28T12:06:11.478Z] =================================================================================================================== 00:35:41.351 [2024-11-28T12:06:11.478Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:41.351 13:06:10 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:35:41.351 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:35:41.351 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:35:41.351 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3607565 00:35:41.351 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3607565 /var/tmp/bdevperf.sock 00:35:41.351 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:35:41.351 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3607565 ']' 00:35:41.351 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:41.351 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:41.351 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:41.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:35:41.351 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:41.351 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:41.923 13:06:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:41.923 13:06:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:35:41.923 13:06:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:41.923 [2024-11-28 13:06:11.961047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:41.923 13:06:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:35:42.184 [2024-11-28 13:06:12.137099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:35:42.184 13:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:35:42.444 NVMe0n1 00:35:42.444 13:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:35:43.013 00:35:43.013 13:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:35:43.273 00:35:43.273 13:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:43.273 13:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:35:43.534 13:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:43.794 13:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:35:47.096 13:06:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:47.096 13:06:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:35:47.096 13:06:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3608855 00:35:47.096 13:06:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:47.096 13:06:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3608855 00:35:48.038 { 00:35:48.038 "results": [ 00:35:48.038 { 00:35:48.038 "job": "NVMe0n1", 00:35:48.038 "core_mask": "0x1", 00:35:48.038 "workload": "verify", 00:35:48.038 "status": "finished", 00:35:48.038 "verify_range": { 00:35:48.038 "start": 0, 00:35:48.038 "length": 16384 00:35:48.038 }, 00:35:48.038 "queue_depth": 128, 00:35:48.038 "io_size": 4096, 00:35:48.038 "runtime": 1.00575, 00:35:48.038 "iops": 12610.489684315187, 00:35:48.038 "mibps": 49.2597253293562, 00:35:48.038 "io_failed": 0, 00:35:48.038 "io_timeout": 0, 00:35:48.038 "avg_latency_us": 
10112.085172111361, 00:35:48.038 "min_latency_us": 2217.013030404277, 00:35:48.038 "max_latency_us": 8430.123621784163 00:35:48.038 } 00:35:48.038 ], 00:35:48.039 "core_count": 1 00:35:48.039 } 00:35:48.039 13:06:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:48.039 [2024-11-28 13:06:11.004891] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:35:48.039 [2024-11-28 13:06:11.004952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3607565 ] 00:35:48.039 [2024-11-28 13:06:11.137749] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:48.039 [2024-11-28 13:06:11.190939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:48.039 [2024-11-28 13:06:11.206636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:48.039 [2024-11-28 13:06:13.655685] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:35:48.039 [2024-11-28 13:06:13.655722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:48.039 [2024-11-28 13:06:13.655730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:48.039 [2024-11-28 13:06:13.655737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:48.039 [2024-11-28 13:06:13.655743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:48.039 [2024-11-28 13:06:13.655748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:48.039 [2024-11-28 13:06:13.655753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:48.039 [2024-11-28 13:06:13.655759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:48.039 [2024-11-28 13:06:13.655764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:48.039 [2024-11-28 13:06:13.655770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:35:48.039 [2024-11-28 13:06:13.655790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:35:48.039 [2024-11-28 13:06:13.655801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e6b30 (9): Bad file descriptor 00:35:48.039 [2024-11-28 13:06:13.664442] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:35:48.039 Running I/O for 1 seconds... 
00:35:48.039 12555.00 IOPS, 49.04 MiB/s 00:35:48.039 Latency(us) 00:35:48.039 [2024-11-28T12:06:18.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:48.039 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:48.039 Verification LBA range: start 0x0 length 0x4000 00:35:48.039 NVMe0n1 : 1.01 12610.49 49.26 0.00 0.00 10112.09 2217.01 8430.12 00:35:48.039 [2024-11-28T12:06:18.166Z] =================================================================================================================== 00:35:48.039 [2024-11-28T12:06:18.166Z] Total : 12610.49 49.26 0.00 0.00 10112.09 2217.01 8430.12 00:35:48.039 13:06:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:48.039 13:06:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:35:48.300 13:06:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:48.300 13:06:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:48.300 13:06:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:35:48.560 13:06:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:48.821 13:06:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:35:52.116 13:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:52.116 13:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:35:52.116 13:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3607565 00:35:52.116 13:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3607565 ']' 00:35:52.116 13:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3607565 00:35:52.116 13:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:35:52.116 13:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:52.116 13:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3607565 00:35:52.116 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:52.116 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:52.116 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3607565' 00:35:52.116 killing process with pid 3607565 00:35:52.116 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3607565 00:35:52.116 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3607565 00:35:52.116 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:35:52.116 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:52.376 rmmod nvme_tcp 00:35:52.376 rmmod nvme_fabrics 00:35:52.376 rmmod nvme_keyring 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3603554 ']' 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3603554 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3603554 ']' 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3603554 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3603554 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3603554' 00:35:52.376 killing process with pid 3603554 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3603554 00:35:52.376 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3603554 00:35:52.636 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:52.636 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:52.636 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:52.636 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:35:52.636 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:35:52.636 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:52.636 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:35:52.636 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:52.636 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:52.636 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:52.636 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:52.636 13:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:54.548 13:06:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:54.548 00:35:54.548 real 0m40.597s 00:35:54.548 user 2m4.474s 00:35:54.548 sys 
0m8.760s 00:35:54.548 13:06:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:54.548 13:06:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:54.548 ************************************ 00:35:54.548 END TEST nvmf_failover 00:35:54.548 ************************************ 00:35:54.548 13:06:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:35:54.548 13:06:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:54.548 13:06:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:54.548 13:06:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.809 ************************************ 00:35:54.809 START TEST nvmf_host_discovery 00:35:54.809 ************************************ 00:35:54.809 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:35:54.809 * Looking for test storage... 
00:35:54.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:54.809 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:54.809 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:35:54.809 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:54.809 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:54.809 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:54.809 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:54.809 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:54.809 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:35:54.809 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:35:54.809 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:35:54.809 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:35:54.809 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:35:54.809 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:35:54.809 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:35:54.809 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:54.809 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:35:54.809 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:35:54.809 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:35:54.809 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:54.809 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:35:54.809 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:35:54.809 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:54.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.810 --rc genhtml_branch_coverage=1 00:35:54.810 --rc genhtml_function_coverage=1 00:35:54.810 --rc 
genhtml_legend=1 00:35:54.810 --rc geninfo_all_blocks=1 00:35:54.810 --rc geninfo_unexecuted_blocks=1 00:35:54.810 00:35:54.810 ' 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:54.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.810 --rc genhtml_branch_coverage=1 00:35:54.810 --rc genhtml_function_coverage=1 00:35:54.810 --rc genhtml_legend=1 00:35:54.810 --rc geninfo_all_blocks=1 00:35:54.810 --rc geninfo_unexecuted_blocks=1 00:35:54.810 00:35:54.810 ' 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:54.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.810 --rc genhtml_branch_coverage=1 00:35:54.810 --rc genhtml_function_coverage=1 00:35:54.810 --rc genhtml_legend=1 00:35:54.810 --rc geninfo_all_blocks=1 00:35:54.810 --rc geninfo_unexecuted_blocks=1 00:35:54.810 00:35:54.810 ' 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:54.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.810 --rc genhtml_branch_coverage=1 00:35:54.810 --rc genhtml_function_coverage=1 00:35:54.810 --rc genhtml_legend=1 00:35:54.810 --rc geninfo_all_blocks=1 00:35:54.810 --rc geninfo_unexecuted_blocks=1 00:35:54.810 00:35:54.810 ' 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:54.810 13:06:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:54.810 13:06:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:54.810 13:06:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:54.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:54.810 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:55.071 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:55.071 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:55.071 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:35:55.071 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:35:55.071 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:35:55.071 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:35:55.071 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:35:55.071 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:35:55.071 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:35:55.071 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:55.071 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:55.071 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:55.071 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:55.071 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:35:55.071 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:55.071 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:55.071 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:55.071 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:55.071 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:55.071 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:35:55.071 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:36:03.212 
13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:03.212 13:06:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:03.212 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:03.212 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:03.212 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:03.212 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:03.212 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:03.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:03.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:36:03.213 00:36:03.213 --- 10.0.0.2 ping statistics --- 00:36:03.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.213 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:03.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:03.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:36:03.213 00:36:03.213 --- 10.0.0.1 ping statistics --- 00:36:03.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.213 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:03.213 
13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3613900 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3613900 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3613900 ']' 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:03.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:03.213 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:03.213 [2024-11-28 13:06:32.524205] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:36:03.213 [2024-11-28 13:06:32.524273] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:03.213 [2024-11-28 13:06:32.667885] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:36:03.213 [2024-11-28 13:06:32.726511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:03.213 [2024-11-28 13:06:32.752769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:03.213 [2024-11-28 13:06:32.752813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:03.213 [2024-11-28 13:06:32.752821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:03.213 [2024-11-28 13:06:32.752829] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:03.213 [2024-11-28 13:06:32.752835] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:03.213 [2024-11-28 13:06:32.753572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:03.473 [2024-11-28 13:06:33.391896] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:03.473 [2024-11-28 13:06:33.404133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:36:03.473 13:06:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:03.473 null0 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:03.473 null1 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3614244 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3614244 /tmp/host.sock 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 3614244 ']' 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:36:03.473 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:03.473 13:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:03.473 [2024-11-28 13:06:33.501248] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:36:03.473 [2024-11-28 13:06:33.501310] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3614244 ] 00:36:03.733 [2024-11-28 13:06:33.637757] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:36:03.733 [2024-11-28 13:06:33.695845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:03.733 [2024-11-28 13:06:33.724217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:04.304 
13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:04.304 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.565 13:06:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:04.565 13:06:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:04.565 [2024-11-28 13:06:34.672491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:04.565 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # xargs 00:36:04.826 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.826 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:36:04.826 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:36:04.826 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:04.826 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:04.826 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.826 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:04.826 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:04.826 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:04.826 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.826 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:36:04.826 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:36:04.826 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:36:04.826 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:36:04.826 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:04.826 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:36:04.826 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:36:04.826 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:36:04.826 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:36:04.826 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:36:04.826 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:36:04.826 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # 
waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:36:04.827 13:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:36:05.397 [2024-11-28 13:06:35.347079] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:05.397 [2024-11-28 13:06:35.347117] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:05.397 
[2024-11-28 13:06:35.347133] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:05.397 [2024-11-28 13:06:35.436183] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:36:05.656 [2024-11-28 13:06:35.659139] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:36:05.656 [2024-11-28 13:06:35.660322] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xa02430:1 started. 00:36:05.656 [2024-11-28 13:06:35.662122] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:36:05.656 [2024-11-28 13:06:35.662149] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:05.656 [2024-11-28 13:06:35.664701] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xa02430 was disconnected and freed. delete nvme_qpair. 
00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:05.915 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.915 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:36:05.915 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:36:05.915 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:36:05.915 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:36:05.915 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:36:05.915 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:36:05.915 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:36:05.915 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:36:05.915 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:36:05.915 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:05.915 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.915 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:36:05.915 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:05.915 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:36:05.915 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:06.175 [2024-11-28 13:06:36.112572] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xa02610:1 started. 
00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:36:06.175 [2024-11-28 13:06:36.115034] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xa02610 was disconnected and freed. delete nvme_qpair. 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:36:06.175 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == 
\n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:06.176 [2024-11-28 13:06:36.220768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:06.176 [2024-11-28 13:06:36.221514] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:36:06.176 [2024-11-28 13:06:36.221536] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:36:06.176 13:06:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:06.176 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:06.436 [2024-11-28 13:06:36.307585] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:36:06.436 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.436 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:36:06.436 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:36:06.436 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:36:06.436 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:36:06.436 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:36:06.436 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:36:06.436 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:36:06.436 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:36:06.436 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:36:06.436 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:06.436 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.436 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:36:06.436 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:06.436 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:36:06.436 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.436 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:36:06.436 13:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:36:06.436 [2024-11-28 13:06:36.409121] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:36:06.436 [2024-11-28 13:06:36.409163] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:36:06.436 [2024-11-28 13:06:36.409172] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:36:06.436 [2024-11-28 13:06:36.409178] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:07.376 [2024-11-28 13:06:37.477793] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:36:07.376 [2024-11-28 13:06:37.477815] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:07.376 [2024-11-28 13:06:37.479344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:07.376 [2024-11-28 13:06:37.479362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:07.376 [2024-11-28 13:06:37.479373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:07.376 [2024-11-28 13:06:37.479381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:07.376 [2024-11-28 13:06:37.479389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:07.376 [2024-11-28 13:06:37.479396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:07.376 [2024-11-28 13:06:37.479404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:07.376 [2024-11-28 13:06:37.479411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:07.376 [2024-11-28 13:06:37.479419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x9d4610 is same with the state(6) to be set 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:36:07.376 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:36:07.377 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:36:07.377 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:36:07.377 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:07.377 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:07.377 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.377 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:07.377 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:07.377 [2024-11-28 13:06:37.489333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d4610 (9): Bad file descriptor 00:36:07.377 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:07.377 [2024-11-28 13:06:37.499347] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:36:07.377 [2024-11-28 13:06:37.499361] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:36:07.377 [2024-11-28 13:06:37.499366] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:36:07.377 [2024-11-28 13:06:37.499372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:36:07.377 [2024-11-28 13:06:37.499389] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:36:07.377 [2024-11-28 13:06:37.499512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.377 [2024-11-28 13:06:37.499526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d4610 with addr=10.0.0.2, port=4420 00:36:07.377 [2024-11-28 13:06:37.499535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4610 is same with the state(6) to be set 00:36:07.377 [2024-11-28 13:06:37.499547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d4610 (9): Bad file descriptor 00:36:07.377 [2024-11-28 13:06:37.499558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:36:07.377 [2024-11-28 13:06:37.499565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:36:07.377 [2024-11-28 13:06:37.499574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:36:07.377 [2024-11-28 13:06:37.499581] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:36:07.377 [2024-11-28 13:06:37.499587] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:36:07.377 [2024-11-28 13:06:37.499592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:36:07.642 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.642 [2024-11-28 13:06:37.509396] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:36:07.642 [2024-11-28 13:06:37.509408] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:36:07.642 [2024-11-28 13:06:37.509413] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:36:07.643 [2024-11-28 13:06:37.509418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:36:07.643 [2024-11-28 13:06:37.509432] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:36:07.643 [2024-11-28 13:06:37.509755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.643 [2024-11-28 13:06:37.509768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d4610 with addr=10.0.0.2, port=4420 00:36:07.643 [2024-11-28 13:06:37.509775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4610 is same with the state(6) to be set 00:36:07.643 [2024-11-28 13:06:37.509786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d4610 (9): Bad file descriptor 00:36:07.643 [2024-11-28 13:06:37.509797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:36:07.643 [2024-11-28 13:06:37.509804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:36:07.643 [2024-11-28 13:06:37.509811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:36:07.643 [2024-11-28 13:06:37.509818] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:36:07.643 [2024-11-28 13:06:37.509823] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:36:07.643 [2024-11-28 13:06:37.509827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:36:07.643 [2024-11-28 13:06:37.519439] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:36:07.643 [2024-11-28 13:06:37.519451] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:36:07.643 [2024-11-28 13:06:37.519455] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:36:07.643 [2024-11-28 13:06:37.519460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:36:07.643 [2024-11-28 13:06:37.519474] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:36:07.643 [2024-11-28 13:06:37.519753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.643 [2024-11-28 13:06:37.519765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d4610 with addr=10.0.0.2, port=4420 00:36:07.643 [2024-11-28 13:06:37.519773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4610 is same with the state(6) to be set 00:36:07.643 [2024-11-28 13:06:37.519784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d4610 (9): Bad file descriptor 00:36:07.643 [2024-11-28 13:06:37.519795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:36:07.644 [2024-11-28 13:06:37.519802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:36:07.644 [2024-11-28 13:06:37.519809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:36:07.644 [2024-11-28 13:06:37.519815] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:36:07.644 [2024-11-28 13:06:37.519821] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:36:07.644 [2024-11-28 13:06:37.519825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:36:07.644 [2024-11-28 13:06:37.529481] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:36:07.644 [2024-11-28 13:06:37.529495] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:36:07.644 [2024-11-28 13:06:37.529500] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:36:07.644 [2024-11-28 13:06:37.529505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:36:07.644 [2024-11-28 13:06:37.529523] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:36:07.644 [2024-11-28 13:06:37.529801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.644 [2024-11-28 13:06:37.529813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d4610 with addr=10.0.0.2, port=4420 00:36:07.644 [2024-11-28 13:06:37.529820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4610 is same with the state(6) to be set 00:36:07.644 [2024-11-28 13:06:37.529832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d4610 (9): Bad file descriptor 00:36:07.644 [2024-11-28 13:06:37.529842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:36:07.644 [2024-11-28 13:06:37.529849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:36:07.644 [2024-11-28 13:06:37.529856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:36:07.644 [2024-11-28 13:06:37.529862] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:36:07.644 [2024-11-28 13:06:37.529868] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:36:07.644 [2024-11-28 13:06:37.529872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:36:07.644 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.644 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:36:07.644 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:07.645 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:07.645 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:36:07.645 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:36:07.645 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:36:07.645 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:36:07.645 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:07.645 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:07.645 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:07.645 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:07.645 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.645 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:36:07.645 [2024-11-28 13:06:37.539532] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:36:07.645 [2024-11-28 13:06:37.539544] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:36:07.645 [2024-11-28 13:06:37.539548] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:36:07.645 [2024-11-28 13:06:37.539553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:36:07.645 [2024-11-28 13:06:37.539566] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:36:07.645 [2024-11-28 13:06:37.539843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.645 [2024-11-28 13:06:37.539855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d4610 with addr=10.0.0.2, port=4420 00:36:07.645 [2024-11-28 13:06:37.539863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4610 is same with the state(6) to be set 00:36:07.645 [2024-11-28 13:06:37.539878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d4610 (9): Bad file descriptor 00:36:07.645 [2024-11-28 13:06:37.539888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:36:07.645 [2024-11-28 13:06:37.539895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:36:07.645 [2024-11-28 13:06:37.539902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:36:07.645 [2024-11-28 13:06:37.539908] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:36:07.645 [2024-11-28 13:06:37.539913] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:36:07.645 [2024-11-28 13:06:37.539917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:36:07.645 [2024-11-28 13:06:37.549575] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:36:07.645 [2024-11-28 13:06:37.549589] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:36:07.645 [2024-11-28 13:06:37.549594] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:36:07.645 [2024-11-28 13:06:37.549599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:36:07.645 [2024-11-28 13:06:37.549614] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:36:07.646 [2024-11-28 13:06:37.549895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.646 [2024-11-28 13:06:37.549907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d4610 with addr=10.0.0.2, port=4420 00:36:07.646 [2024-11-28 13:06:37.549915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4610 is same with the state(6) to be set 00:36:07.646 [2024-11-28 13:06:37.549926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d4610 (9): Bad file descriptor 00:36:07.646 [2024-11-28 13:06:37.549937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:36:07.646 [2024-11-28 13:06:37.549943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:36:07.646 [2024-11-28 13:06:37.549951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:36:07.646 [2024-11-28 13:06:37.549957] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:36:07.646 [2024-11-28 13:06:37.549962] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:36:07.646 [2024-11-28 13:06:37.549967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:36:07.646 [2024-11-28 13:06:37.559622] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:36:07.646 [2024-11-28 13:06:37.559633] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:36:07.646 [2024-11-28 13:06:37.559637] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:36:07.646 [2024-11-28 13:06:37.559642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:36:07.646 [2024-11-28 13:06:37.559655] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:36:07.646 [2024-11-28 13:06:37.559847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.646 [2024-11-28 13:06:37.559859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d4610 with addr=10.0.0.2, port=4420 00:36:07.646 [2024-11-28 13:06:37.559869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4610 is same with the state(6) to be set 00:36:07.646 [2024-11-28 13:06:37.559880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d4610 (9): Bad file descriptor 00:36:07.646 [2024-11-28 13:06:37.559891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:36:07.646 [2024-11-28 13:06:37.559897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:36:07.646 [2024-11-28 13:06:37.559904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:36:07.647 [2024-11-28 13:06:37.559910] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:36:07.647 [2024-11-28 13:06:37.559915] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:36:07.647 [2024-11-28 13:06:37.559919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:36:07.647 [2024-11-28 13:06:37.565240] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:36:07.647 [2024-11-28 13:06:37.565258] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.647 
13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:36:07.647 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:36:07.648 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.648 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:07.648 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.648 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:36:07.648 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:36:07.648 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:36:07.648 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:36:07.648 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:36:07.648 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.648 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:07.648 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.648 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:36:07.648 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:36:07.648 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:36:07.648 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:36:07.648 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:36:07.648 13:06:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:36:07.648 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:07.648 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:07.648 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.649 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:07.649 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:07.649 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:07.649 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.649 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:36:07.649 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:36:07.649 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:36:07.649 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:36:07.649 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:36:07.649 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:36:07.649 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:36:07.649 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:36:07.649 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:07.649 
13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:07.649 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.649 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:07.649 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:07.649 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:07.914 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.914 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:36:07.914 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:36:07.914 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:36:07.914 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:36:07.914 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:36:07.914 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:07.914 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:36:07.914 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:36:07.914 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:36:07.914 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:36:07.914 13:06:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:36:07.914 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:36:07.914 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.914 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:07.914 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.914 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:36:07.914 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:36:07.914 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:36:07.914 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:36:07.914 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:07.914 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.914 13:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:08.858 [2024-11-28 13:06:38.876779] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:08.858 [2024-11-28 13:06:38.876793] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:08.858 [2024-11-28 13:06:38.876801] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:09.119 [2024-11-28 13:06:39.004878] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:36:09.119 [2024-11-28 13:06:39.109416] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:36:09.119 [2024-11-28 13:06:39.110100] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xa0e060:1 started. 00:36:09.119 [2024-11-28 13:06:39.111391] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:36:09.119 [2024-11-28 13:06:39.111412] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:09.119 request: 00:36:09.119 { 00:36:09.119 "name": "nvme", 00:36:09.119 "trtype": "tcp", 00:36:09.119 "traddr": "10.0.0.2", 00:36:09.119 "adrfam": "ipv4", 00:36:09.119 "trsvcid": "8009", 00:36:09.119 "hostnqn": "nqn.2021-12.io.spdk:test", 00:36:09.119 "wait_for_attach": true, 00:36:09.119 "method": "bdev_nvme_start_discovery", 00:36:09.119 "req_id": 1 00:36:09.119 } 00:36:09.119 Got JSON-RPC error response 00:36:09.119 response: 00:36:09.119 { 00:36:09.119 "code": -17, 00:36:09.119 "message": "File exists" 00:36:09.119 } 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:36:09.119 
13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.119 [2024-11-28 13:06:39.155818] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xa0e060 was disconnected and freed. delete nvme_qpair. 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:36:09.119 13:06:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.119 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:09.380 request: 00:36:09.380 { 00:36:09.380 "name": "nvme_second", 00:36:09.380 "trtype": "tcp", 00:36:09.380 "traddr": "10.0.0.2", 00:36:09.380 "adrfam": "ipv4", 00:36:09.380 "trsvcid": "8009", 00:36:09.380 "hostnqn": "nqn.2021-12.io.spdk:test", 00:36:09.381 "wait_for_attach": true, 00:36:09.381 "method": "bdev_nvme_start_discovery", 00:36:09.381 "req_id": 1 00:36:09.381 } 00:36:09.381 Got JSON-RPC error response 00:36:09.381 response: 00:36:09.381 { 00:36:09.381 "code": -17, 00:36:09.381 "message": "File exists" 00:36:09.381 } 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.381 13:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:10.321 [2024-11-28 13:06:40.368227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.321 [2024-11-28 13:06:40.368258] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa0d520 with addr=10.0.0.2, port=8010 00:36:10.321 [2024-11-28 13:06:40.368271] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:10.321 [2024-11-28 13:06:40.368277] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:10.321 [2024-11-28 13:06:40.368282] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:36:11.262 [2024-11-28 13:06:41.368194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.262 [2024-11-28 13:06:41.368216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa0d520 with addr=10.0.0.2, port=8010 00:36:11.262 [2024-11-28 13:06:41.368227] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:11.262 [2024-11-28 13:06:41.368232] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:11.262 [2024-11-28 13:06:41.368237] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:36:12.649 [2024-11-28 13:06:42.367964] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:36:12.649 request: 00:36:12.649 { 00:36:12.649 "name": "nvme_second", 00:36:12.649 "trtype": "tcp", 00:36:12.649 "traddr": "10.0.0.2", 00:36:12.649 "adrfam": "ipv4", 00:36:12.649 "trsvcid": "8010", 00:36:12.649 "hostnqn": "nqn.2021-12.io.spdk:test", 00:36:12.649 "wait_for_attach": false, 00:36:12.649 "attach_timeout_ms": 3000, 00:36:12.649 "method": "bdev_nvme_start_discovery", 00:36:12.649 "req_id": 1 00:36:12.649 } 00:36:12.649 Got JSON-RPC error response 00:36:12.649 response: 00:36:12.649 { 00:36:12.649 "code": -110, 00:36:12.649 "message": "Connection timed out" 00:36:12.649 } 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 
1 == 0 ]] 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3614244 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:36:12.649 13:06:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:12.649 rmmod nvme_tcp 00:36:12.649 rmmod nvme_fabrics 00:36:12.649 rmmod nvme_keyring 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3613900 ']' 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3613900 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3613900 ']' 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3613900 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3613900 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3613900' 
00:36:12.649 killing process with pid 3613900 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3613900 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3613900 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:12.649 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.197 13:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:15.197 00:36:15.197 real 0m20.047s 00:36:15.197 user 0m22.828s 00:36:15.197 sys 0m7.197s 00:36:15.197 13:06:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:15.197 13:06:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:15.197 ************************************ 00:36:15.197 END TEST nvmf_host_discovery 00:36:15.197 ************************************ 00:36:15.197 13:06:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:36:15.197 13:06:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:15.197 13:06:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:15.197 13:06:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.197 ************************************ 00:36:15.197 START TEST nvmf_host_multipath_status 00:36:15.197 ************************************ 00:36:15.197 13:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:36:15.197 * Looking for test storage... 
00:36:15.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:15.197 13:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:15.197 13:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:36:15.197 13:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:36:15.197 13:06:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:15.197 13:06:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:15.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.197 --rc genhtml_branch_coverage=1 00:36:15.197 --rc genhtml_function_coverage=1 00:36:15.197 --rc genhtml_legend=1 00:36:15.197 --rc geninfo_all_blocks=1 00:36:15.197 --rc geninfo_unexecuted_blocks=1 00:36:15.197 00:36:15.197 ' 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:15.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.197 --rc genhtml_branch_coverage=1 00:36:15.197 --rc genhtml_function_coverage=1 00:36:15.197 --rc genhtml_legend=1 00:36:15.197 --rc geninfo_all_blocks=1 00:36:15.197 --rc geninfo_unexecuted_blocks=1 00:36:15.197 00:36:15.197 ' 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:15.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.197 --rc genhtml_branch_coverage=1 00:36:15.197 --rc genhtml_function_coverage=1 00:36:15.197 --rc genhtml_legend=1 00:36:15.197 --rc geninfo_all_blocks=1 00:36:15.197 --rc geninfo_unexecuted_blocks=1 00:36:15.197 00:36:15.197 ' 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:15.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.197 --rc genhtml_branch_coverage=1 00:36:15.197 --rc genhtml_function_coverage=1 00:36:15.197 --rc genhtml_legend=1 00:36:15.197 --rc geninfo_all_blocks=1 00:36:15.197 --rc geninfo_unexecuted_blocks=1 00:36:15.197 00:36:15.197 ' 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:36:15.197 
13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:15.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:15.197 13:06:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:36:15.197 13:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:23.341 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:23.341 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:36:23.341 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:23.341 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:23.341 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:23.341 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:23.341 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:23.341 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:36:23.341 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:23.341 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:36:23.341 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:36:23.341 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:36:23.341 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:36:23.341 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:36:23.341 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:36:23.341 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:23.342 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:23.342 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:23.342 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:23.342 13:06:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:23.342 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:23.342 13:06:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:23.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:23.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:36:23.342 00:36:23.342 --- 10.0.0.2 ping statistics --- 00:36:23.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:23.342 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:23.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:23.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:36:23.342 00:36:23.342 --- 10.0.0.1 ping statistics --- 00:36:23.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:23.342 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3620105 00:36:23.342 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 3620105 00:36:23.343 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:36:23.343 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3620105 ']' 00:36:23.343 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:23.343 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:23.343 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:23.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:23.343 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:23.343 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:23.343 [2024-11-28 13:06:52.687362] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:36:23.343 [2024-11-28 13:06:52.687437] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:23.343 [2024-11-28 13:06:52.832845] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
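For readers following the trace: the nvmf/common.sh steps above (the records tagged @265 through @291) build a two-interface test topology in which the target NIC cvl_0_0 is moved into a private network namespace at 10.0.0.2/24 while the initiator NIC cvl_0_1 stays in the root namespace at 10.0.0.1/24, with an iptables ACCEPT rule opening the NVMe/TCP port. A minimal dry-run sketch of that sequence follows; the variable names are assumptions mirroring the trace, and the plan is only printed rather than executed, since the real commands require root and the actual NICs.

```shell
#!/bin/sh
# Dry-run sketch of the namespace topology set up by nvmf/common.sh.
# Names mirror the trace above; nothing here touches the real network.
TARGET_NS=cvl_0_0_ns_spdk   # namespace that will hold the target-side NIC
TARGET_IF=cvl_0_0           # target interface, gets 10.0.0.2/24
INITIATOR_IF=cvl_0_1        # initiator interface, gets 10.0.0.1/24

plan() {
    echo "ip netns add $TARGET_NS"
    echo "ip link set $TARGET_IF netns $TARGET_NS"
    echo "ip addr add 10.0.0.1/24 dev $INITIATOR_IF"
    echo "ip netns exec $TARGET_NS ip addr add 10.0.0.2/24 dev $TARGET_IF"
    echo "ip link set $INITIATOR_IF up"
    echo "ip netns exec $TARGET_NS ip link set $TARGET_IF up"
    echo "ip netns exec $TARGET_NS ip link set lo up"
    echo "iptables -I INPUT 1 -i $INITIATOR_IF -p tcp --dport 4420 -j ACCEPT"
}
plan
```

After this sequence, the two cross-namespace pings in the trace confirm reachability in both directions before the target application is launched inside the namespace via `ip netns exec`.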
00:36:23.343 [2024-11-28 13:06:52.891783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:23.343 [2024-11-28 13:06:52.918639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:23.343 [2024-11-28 13:06:52.918684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:23.343 [2024-11-28 13:06:52.918692] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:23.343 [2024-11-28 13:06:52.918699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:23.343 [2024-11-28 13:06:52.918705] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:23.343 [2024-11-28 13:06:52.920205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:23.343 [2024-11-28 13:06:52.920264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:23.604 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:23.604 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:36:23.604 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:23.604 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:23.604 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:23.604 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:23.604 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3620105 00:36:23.604 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:23.604 [2024-11-28 13:06:53.711564] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:23.865 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:36:23.865 Malloc0 00:36:23.865 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:36:24.125 13:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:24.386 13:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:24.646 [2024-11-28 13:06:54.525683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:24.646 13:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:24.646 [2024-11-28 13:06:54.717727] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:24.646 13:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3620528 00:36:24.647 13:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 
-z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:36:24.647 13:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:24.647 13:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3620528 /var/tmp/bdevperf.sock 00:36:24.647 13:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3620528 ']' 00:36:24.647 13:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:24.647 13:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:24.647 13:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:24.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:36:24.647 13:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:24.647 13:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:25.588 13:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:25.588 13:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:36:25.588 13:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:36:25.848 13:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:36:26.417 Nvme0n1 00:36:26.417 13:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:36:26.676 Nvme0n1 00:36:26.676 13:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:36:26.676 13:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:36:28.589 13:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:36:28.589 13:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:36:28.907 13:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:28.907 13:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:36:29.914 13:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:36:29.914 13:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:29.914 13:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:29.914 13:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:30.175 13:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:30.175 13:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:30.175 13:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:30.175 13:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:30.436 13:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:30.436 13:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:30.436 13:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:30.436 13:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:30.436 13:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:30.436 13:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:30.436 13:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:30.436 13:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:30.698 13:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:30.698 13:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:30.698 13:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:30.698 13:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:30.958 13:07:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:30.958 13:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:30.958 13:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:30.958 13:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:30.958 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:30.958 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:36:30.958 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:31.218 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:31.478 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:36:32.419 13:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:36:32.419 13:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:36:32.419 13:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:32.419 13:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:32.680 13:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:32.680 13:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:32.680 13:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:32.680 13:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:32.941 13:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:32.941 13:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:32.941 13:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:32.941 13:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:32.941 13:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:32.941 13:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:32.941 13:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:32.941 13:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:33.201 13:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:33.201 13:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:33.201 13:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:33.201 13:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:33.462 13:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:33.462 13:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:33.462 13:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:33.462 13:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:33.462 13:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:33.462 13:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:36:33.462 13:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:33.723 13:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:36:33.985 13:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:36:34.927 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:36:34.927 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:34.927 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:34.927 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:35.187 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:35.187 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:35.187 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:35.187 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:35.187 13:07:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:35.187 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:35.187 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:35.187 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:35.448 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:35.448 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:35.448 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:35.448 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:35.708 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:35.708 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:35.708 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:35.708 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:35.708 
13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:35.708 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:35.708 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:35.708 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:35.971 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:35.971 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:36:35.971 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:36.233 13:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:36:36.233 13:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:36:37.618 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:36:37.618 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:37.618 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:37.618 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:37.618 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:37.618 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:37.618 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:37.618 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:37.618 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:37.618 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:37.618 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:37.618 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:37.880 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:37.880 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:37.880 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:37.880 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:36:38.140 13:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:38.140 13:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:36:38.140 13:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:38.140 13:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:36:38.140 13:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:38.140 13:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:36:38.140 13:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:38.402 13:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:36:38.402 13:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:36:38.402 13:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:36:38.402 13:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:36:38.662 13:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:36:38.923 13:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:36:39.865 13:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:36:39.866 13:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:36:39.866 13:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:39.866 13:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:36:40.127 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:36:40.127 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:36:40.127 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:40.127 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:36:40.127 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:36:40.127 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:36:40.127 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:40.127 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:36:40.389 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:40.389 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:36:40.389 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:40.389 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:36:40.651 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:40.651 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:36:40.651 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:40.651 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:36:40.651 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:36:40.651 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:36:40.651 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:40.929 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:36:40.929 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:36:40.929 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:36:40.929 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:36:41.191 13:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:36:41.191 13:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:36:42.134 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:36:42.134 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:36:42.394 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:42.394 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:36:42.394 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:36:42.394 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:36:42.394 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:42.394 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:36:42.654 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:42.654 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:36:42.654 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:42.654 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:36:42.915 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:42.915 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:36:42.915 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:42.915 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:36:42.915 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:42.915 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:36:42.915 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:42.915 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:36:43.176 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:36:43.176 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:36:43.176 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:43.176 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:36:43.437 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:43.437 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:36:43.437 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:36:43.437 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:36:43.699 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:36:43.960 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:36:44.902 13:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:36:44.902 13:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:36:44.902 13:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:44.902 13:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:36:45.171 13:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:45.171 13:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:36:45.171 13:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:45.171 13:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:36:45.440 13:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:45.440 13:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:36:45.440 13:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:45.440 13:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:36:45.440 13:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:45.441 13:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:36:45.441 13:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:36:45.441 13:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:45.701 13:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:45.701 13:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:36:45.701 13:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:45.701 13:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:36:45.963 13:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:45.963 13:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:36:45.963 13:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:45.963 13:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:36:45.963 13:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:45.963 13:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:36:45.963 13:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:36:46.223 13:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:36:46.483 13:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:36:47.424 13:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:36:47.424 13:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:36:47.424 13:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:47.424 13:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:36:47.684 13:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:36:47.684 13:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:36:47.684 13:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:47.684 13:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:36:47.684 13:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:47.684 13:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:36:47.684 13:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:47.684 13:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:36:47.944 13:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:47.944 13:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:36:47.944 13:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:47.944 13:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:36:48.205 13:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:48.205 13:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:36:48.205 13:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:48.205 13:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:36:48.465 13:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:48.465 13:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:36:48.465 13:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:36:48.465 13:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:48.465 13:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:48.465 13:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:36:48.465 13:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:36:48.726 13:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:36:48.986 13:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:36:49.926 13:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:36:49.926 13:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:36:49.926 13:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:49.926 13:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:36:50.197 13:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:50.197 13:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:36:50.197 13:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:50.197 13:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:36:50.197 13:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:50.197 13:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:36:50.197 13:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:50.197 13:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:36:50.456 13:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:50.456 13:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:36:50.456 13:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:50.457 13:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:36:50.717 13:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:50.717 13:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:36:50.717 13:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:50.717 13:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:36:50.717 13:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:50.717 13:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:36:50.717 13:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:50.717 13:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:36:50.978 13:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:50.978 13:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:36:50.978 13:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:36:51.239 13:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:36:51.239 13:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:36:52.624 13:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:36:52.624 13:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:36:52.624 13:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:52.624 13:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:36:52.624 13:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:52.624 13:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:36:52.624 13:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:52.624 13:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:36:52.624 13:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:36:52.624 13:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:36:52.624 13:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:52.624 13:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:36:52.885 13:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:52.885 13:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:36:52.885 13:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:52.885 13:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:36:53.146 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:53.146 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:36:53.146 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:53.146 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:36:53.407 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:53.407 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:36:53.407 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:53.407 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:36:53.407 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:36:53.407 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3620528
00:36:53.407 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3620528 ']'
00:36:53.407 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3620528
00:36:53.407 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:36:53.407 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:53.407 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3620528
00:36:53.671 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:36:53.671 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:36:53.671 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3620528' killing process with pid 3620528 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3620528
00:36:53.671 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3620528
00:36:53.671 {
00:36:53.671 "results": [
00:36:53.671 {
00:36:53.671 "job": "Nvme0n1",
00:36:53.671 "core_mask": "0x4",
00:36:53.671 "workload": "verify",
00:36:53.671 "status": "terminated",
00:36:53.671 "verify_range": {
00:36:53.671 "start": 0,
00:36:53.671 "length": 16384
00:36:53.671 },
00:36:53.671 "queue_depth": 128,
00:36:53.671 "io_size": 4096,
00:36:53.671 "runtime": 26.868509,
00:36:53.671 "iops": 11785.283656789441,
00:36:53.671 "mibps": 46.036264284333754,
00:36:53.671 "io_failed": 0,
00:36:53.671 "io_timeout": 0,
00:36:53.671 "avg_latency_us": 10841.229696344006,
00:36:53.671 "min_latency_us": 306.20781824256596,
00:36:53.671 "max_latency_us": 3012948.0788506516
00:36:53.671 }
00:36:53.671 ],
00:36:53.671 "core_count": 1
00:36:53.671 }
00:36:53.671 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3620528
00:36:53.671 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:36:53.671 [2024-11-28 13:06:54.806088] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization...
00:36:53.671 [2024-11-28 13:06:54.806189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3620528 ]
00:36:53.671 [2024-11-28 13:06:54.943561] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:36:53.671 [2024-11-28 13:06:55.002626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:53.671 [2024-11-28 13:06:55.030640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:36:53.671 Running I/O for 90 seconds...
00:36:53.671 10316.00 IOPS, 40.30 MiB/s [2024-11-28T12:07:23.798Z] 10612.50 IOPS, 41.46 MiB/s [2024-11-28T12:07:23.798Z] 10756.33 IOPS, 42.02 MiB/s [2024-11-28T12:07:23.798Z] 11141.75 IOPS, 43.52 MiB/s [2024-11-28T12:07:23.798Z] 11473.20 IOPS, 44.82 MiB/s [2024-11-28T12:07:23.798Z] 11679.67 IOPS, 45.62 MiB/s [2024-11-28T12:07:23.798Z] 11832.14 IOPS, 46.22 MiB/s [2024-11-28T12:07:23.798Z] 11943.50 IOPS, 46.65 MiB/s [2024-11-28T12:07:23.798Z] 12060.56 IOPS, 47.11 MiB/s [2024-11-28T12:07:23.798Z] 12144.80 IOPS, 47.44 MiB/s [2024-11-28T12:07:23.798Z] 12195.91 IOPS, 47.64 MiB/s [2024-11-28T12:07:23.798Z] [2024-11-28 13:07:08.608644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:119184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:53.671 [2024-11-28 13:07:08.608679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:36:53.671 [2024-11-28 13:07:08.608711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:119192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:53.671 [2024-11-28 13:07:08.608718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:36:53.671 [2024-11-28 13:07:08.608729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:119200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:53.671 [2024-11-28 13:07:08.608735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:36:53.671 [2024-11-28 13:07:08.608745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:119208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:53.671 [2024-11-28 13:07:08.608750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:36:53.671 [2024-11-28 13:07:08.608761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:119216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:53.671 [2024-11-28 13:07:08.608766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:36:53.671 [2024-11-28 13:07:08.608776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:119224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:53.671 [2024-11-28 13:07:08.608781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:36:53.672 [2024-11-28 13:07:08.608791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:119232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:53.672 [2024-11-28 13:07:08.608796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:36:53.672 [2024-11-28 13:07:08.608807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:119240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:53.672 [2024-11-28 13:07:08.608812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:36:53.672 [2024-11-28 13:07:08.609985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:119248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:53.672 [2024-11-28 13:07:08.609999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:36:53.672 [2024-11-28 13:07:08.610018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119256 len:8 SGL
DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:119264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:119272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:119280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:119288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:119296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 
sqhd:0034 p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:119304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:119312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:119320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:119328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:119336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119344 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:119352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:119368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:119384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 
00:36:53.672 [2024-11-28 13:07:08.610334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:119392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:119400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:119408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:119416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:119424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:119432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 
[2024-11-28 13:07:08.610429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:119440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:119448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:119456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:119464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:119472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 
13:07:08.610532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:119480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:119488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:119504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:119512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:119520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 
13:07:08.610626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:119536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.672 [2024-11-28 13:07:08.610663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:53.672 [2024-11-28 13:07:08.610676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:119544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.610683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.610791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:119552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.610800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.610815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:119560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.610820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 
13:07:08.610835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:119568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.610841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.610855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:119576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.610860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.610875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:119584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.610880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.610895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:119592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.610900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.610915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:119600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.610920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.610934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:119608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 
13:07:08.610940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.610954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:119616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.610959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.610973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:119624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.610979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.610993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:119632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.610998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:119640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.611018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:119648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.611039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 
13:07:08.611054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:119656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.611059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:119664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.611079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:119672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.611099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:119680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.611118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:119688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.611138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:119696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 
13:07:08.611161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:119704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.611180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:119712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.611200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:119720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.611220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:119728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.611240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:119736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.611259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 
13:07:08.611275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:119744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.611280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:119752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.611300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:119760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.611319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:119768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.611339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:119776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.611358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:119784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 
13:07:08.611378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:119792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.611398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:119800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.611417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:119808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.611444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:119816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.611463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:119824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.611483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 
13:07:08.611497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:119832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.673 [2024-11-28 13:07:08.611502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:119128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-11-28 13:07:08.611523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:53.673 [2024-11-28 13:07:08.611538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:119136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.673 [2024-11-28 13:07:08.611543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:08.611558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:119144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.674 [2024-11-28 13:07:08.611563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:08.611577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:119152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.674 [2024-11-28 13:07:08.611582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:08.611597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:119160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.674 [2024-11-28 
13:07:08.611602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:08.611617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:119168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.674 [2024-11-28 13:07:08.611622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:08.611637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:119176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.674 [2024-11-28 13:07:08.611642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:53.674 12163.58 IOPS, 47.51 MiB/s [2024-11-28T12:07:23.801Z] 11227.92 IOPS, 43.86 MiB/s [2024-11-28T12:07:23.801Z] 10425.93 IOPS, 40.73 MiB/s [2024-11-28T12:07:23.801Z] 9792.20 IOPS, 38.25 MiB/s [2024-11-28T12:07:23.801Z] 9974.56 IOPS, 38.96 MiB/s [2024-11-28T12:07:23.801Z] 10137.18 IOPS, 39.60 MiB/s [2024-11-28T12:07:23.801Z] 10482.33 IOPS, 40.95 MiB/s [2024-11-28T12:07:23.801Z] 10811.37 IOPS, 42.23 MiB/s [2024-11-28T12:07:23.801Z] 11011.05 IOPS, 43.01 MiB/s [2024-11-28T12:07:23.801Z] 11092.90 IOPS, 43.33 MiB/s [2024-11-28T12:07:23.801Z] 11153.14 IOPS, 43.57 MiB/s [2024-11-28T12:07:23.801Z] 11358.22 IOPS, 44.37 MiB/s [2024-11-28T12:07:23.801Z] 11572.42 IOPS, 45.20 MiB/s [2024-11-28T12:07:23.801Z] [2024-11-28 13:07:21.316504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.316541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.316573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.316579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.316590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:76800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.674 [2024-11-28 13:07:21.316595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.316606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.674 [2024-11-28 13:07:21.316612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.317929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.317940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.317951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.317957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.317968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.317973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.317984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.317989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.317999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.674 [2024-11-28 13:07:21.318004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.318015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.318020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.318031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.318036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.318046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.318051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.318061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.318067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.318077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.318082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.318093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.318098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.318837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.674 [2024-11-28 13:07:21.318847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.318858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.318869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.318880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.318885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.318896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.318901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.318911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.318916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.318926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.318932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.318942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.318947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.318957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.318962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.318972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.318978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.318988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.318993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.319003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.319008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.319018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.319023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.319034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.319039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.319049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.319055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.319065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.319070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.319080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.319086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:53.674 [2024-11-28 13:07:21.319096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.674 [2024-11-28 13:07:21.319101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:53.675 [2024-11-28 13:07:21.319111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.675 [2024-11-28 13:07:21.319116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:53.675 [2024-11-28 13:07:21.319126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.675 [2024-11-28 13:07:21.319132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:53.675 [2024-11-28 13:07:21.319142] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.675 [2024-11-28 13:07:21.319147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:53.675 [2024-11-28 13:07:21.319161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.675 [2024-11-28 13:07:21.319167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:53.675 [2024-11-28 13:07:21.319177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.675 [2024-11-28 13:07:21.319182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.675 [2024-11-28 13:07:21.319193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.675 [2024-11-28 13:07:21.319198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.675 [2024-11-28 13:07:21.319208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.675 [2024-11-28 13:07:21.319213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:53.675 [2024-11-28 13:07:21.319223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.675 [2024-11-28 13:07:21.319229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:36:53.675 [2024-11-28 13:07:21.319239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:53.675 [2024-11-28 13:07:21.319244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:36:53.675 [2024-11-28 13:07:21.319256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:53.675 [2024-11-28 13:07:21.319261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:36:53.675 [2024-11-28 13:07:21.319271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.675 [2024-11-28 13:07:21.319277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:53.675 11714.20 IOPS, 45.76 MiB/s [2024-11-28T12:07:23.802Z] 11759.96 IOPS, 45.94 MiB/s [2024-11-28T12:07:23.802Z] Received shutdown signal, test time was about 26.869118 seconds
00:36:53.675
00:36:53.675 Latency(us)
00:36:53.675 [2024-11-28T12:07:23.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:53.675 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:36:53.675 Verification LBA range: start 0x0 length 0x4000
00:36:53.675 Nvme0n1 : 26.87 11785.28 46.04 0.00 0.00 10841.23 306.21 3012948.08
00:36:53.675 [2024-11-28T12:07:23.802Z] ===================================================================================================================
00:36:53.675 [2024-11-28T12:07:23.802Z] Total : 11785.28 46.04 0.00 0.00 10841.23 306.21 3012948.08
00:36:53.675 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:53.936 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:36:53.936 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:36:53.936 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:36:53.936 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:53.936 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:36:53.936 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:53.936 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:36:53.936 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:53.936 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:36:53.936 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:53.937 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:36:53.937 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:36:53.937 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3620105 ']'
00:36:53.937 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3620105
00:36:53.937
13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3620105 ']'
00:36:53.937 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3620105
00:36:53.937 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:36:53.937 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:53.937 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3620105
00:36:53.937 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:36:53.937 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:36:53.937 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3620105'
00:36:53.937 killing process with pid 3620105
00:36:53.937 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3620105
00:36:53.937 13:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3620105
00:36:54.198 13:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:54.198 13:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:36:54.198 13:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:54.198 13:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:36:54.198 13:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:36:54.198 13:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:36:54.198 13:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:36:54.198 13:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:54.198 13:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:54.198 13:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:54.198 13:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:54.198 13:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:56.127 13:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:56.128
00:36:56.128 real 0m41.341s
00:36:56.128 user 1m46.463s
00:36:56.128 sys 0m11.717s
00:36:56.128 13:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:56.128 13:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:36:56.128 ************************************
00:36:56.128 END TEST nvmf_host_multipath_status
00:36:56.128 ************************************
00:36:56.128 13:07:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:36:56.128 13:07:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:36:56.128 13:07:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:56.128 13:07:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:36:56.390 ************************************
00:36:56.390 START TEST nvmf_discovery_remove_ifc
00:36:56.390 ************************************
00:36:56.390
13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:36:56.390 * Looking for test storage... 00:36:56.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:36:56.390 13:07:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:56.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:56.390 --rc genhtml_branch_coverage=1 00:36:56.390 --rc genhtml_function_coverage=1 00:36:56.390 --rc genhtml_legend=1 00:36:56.390 --rc geninfo_all_blocks=1 00:36:56.390 --rc geninfo_unexecuted_blocks=1 00:36:56.390 00:36:56.390 ' 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:56.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:56.390 --rc genhtml_branch_coverage=1 00:36:56.390 --rc genhtml_function_coverage=1 00:36:56.390 --rc genhtml_legend=1 00:36:56.390 --rc geninfo_all_blocks=1 00:36:56.390 --rc geninfo_unexecuted_blocks=1 00:36:56.390 00:36:56.390 ' 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:56.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:56.390 --rc genhtml_branch_coverage=1 00:36:56.390 --rc genhtml_function_coverage=1 00:36:56.390 --rc genhtml_legend=1 00:36:56.390 --rc geninfo_all_blocks=1 00:36:56.390 --rc geninfo_unexecuted_blocks=1 00:36:56.390 00:36:56.390 ' 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:56.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:56.390 --rc genhtml_branch_coverage=1 00:36:56.390 --rc genhtml_function_coverage=1 00:36:56.390 --rc genhtml_legend=1 00:36:56.390 --rc geninfo_all_blocks=1 00:36:56.390 --rc geninfo_unexecuted_blocks=1 00:36:56.390 00:36:56.390 ' 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:56.390 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:56.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:36:56.391 
13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:36:56.391 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:04.532 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:04.532 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:04.532 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:04.533 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:04.533 13:07:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:04.533 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:04.533 13:07:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:04.533 13:07:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:04.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:04.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:37:04.533 00:37:04.533 --- 10.0.0.2 ping statistics --- 00:37:04.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:04.533 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:04.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:04.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:37:04.533 00:37:04.533 --- 10.0.0.1 ping statistics --- 00:37:04.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:04.533 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:04.533 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:04.533 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:37:04.533 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:04.533 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:04.533 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:04.533 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3630364 00:37:04.533 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 3630364 00:37:04.533 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:37:04.533 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3630364 ']' 00:37:04.533 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:04.533 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:04.533 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:04.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:04.533 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:04.533 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:04.533 [2024-11-28 13:07:34.074940] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:37:04.533 [2024-11-28 13:07:34.075008] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:04.533 [2024-11-28 13:07:34.218605] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:04.533 [2024-11-28 13:07:34.276665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:04.533 [2024-11-28 13:07:34.302385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:37:04.533 [2024-11-28 13:07:34.302427] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:04.533 [2024-11-28 13:07:34.302435] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:04.533 [2024-11-28 13:07:34.302442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:04.533 [2024-11-28 13:07:34.302448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:04.533 [2024-11-28 13:07:34.303205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:04.795 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:04.795 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:37:04.795 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:04.795 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:04.795 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:05.064 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:05.064 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:37:05.064 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.064 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:05.064 [2024-11-28 13:07:34.945310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:05.064 [2024-11-28 13:07:34.953567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
8009 *** 00:37:05.064 null0 00:37:05.064 [2024-11-28 13:07:34.985435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:05.064 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.064 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3630688 00:37:05.064 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3630688 /tmp/host.sock 00:37:05.064 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:37:05.064 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3630688 ']' 00:37:05.064 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:37:05.064 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:05.064 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:37:05.064 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:37:05.064 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:05.064 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:05.064 [2024-11-28 13:07:35.074107] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:37:05.064 [2024-11-28 13:07:35.074183] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3630688 ] 00:37:05.327 [2024-11-28 13:07:35.210991] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:05.327 [2024-11-28 13:07:35.269195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:05.327 [2024-11-28 13:07:35.297227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:05.899 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:05.899 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:37:05.899 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:05.899 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:37:05.899 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.899 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:05.899 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.899 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:37:05.899 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.899 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:37:05.899 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.899 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:37:05.899 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.899 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:07.284 [2024-11-28 13:07:37.019354] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:37:07.284 [2024-11-28 13:07:37.019389] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:37:07.284 [2024-11-28 13:07:37.019412] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:37:07.284 [2024-11-28 13:07:37.107457] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:37:07.284 [2024-11-28 13:07:37.294354] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:37:07.284 [2024-11-28 13:07:37.295743] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x89b050:1 started. 
00:37:07.284 [2024-11-28 13:07:37.297533] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:37:07.284 [2024-11-28 13:07:37.297598] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:37:07.284 [2024-11-28 13:07:37.297626] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:37:07.284 [2024-11-28 13:07:37.297645] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:37:07.284 [2024-11-28 13:07:37.297671] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:37:07.284 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.284 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:37:07.284 [2024-11-28 13:07:37.299833] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x89b050 was disconnected and freed. delete nvme_qpair. 
00:37:07.284 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:07.284 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:07.284 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.284 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:07.284 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:07.284 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:07.284 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:07.284 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.284 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:37:07.284 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:37:07.284 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:37:07.546 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:37:07.546 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:07.546 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:07.546 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:07.546 13:07:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:07.546 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:07.546 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.546 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:07.546 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.546 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:37:07.546 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:08.488 13:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:08.488 13:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:08.488 13:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:08.488 13:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.488 13:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:08.488 13:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:08.488 13:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:08.488 13:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.488 13:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:37:08.488 13:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:37:09.872 13:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:09.872 13:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:09.872 13:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:09.872 13:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.872 13:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:09.872 13:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:09.872 13:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:09.872 13:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.872 13:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:37:09.872 13:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:10.815 13:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:10.815 13:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:10.815 13:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:10.815 13:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.815 13:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:10.815 13:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:10.815 13:07:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:10.815 13:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.815 13:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:37:10.815 13:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:11.757 13:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:11.757 13:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:11.757 13:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:11.757 13:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.757 13:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:11.757 13:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:11.757 13:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:11.758 13:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.758 13:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:37:11.758 13:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:12.698 [2024-11-28 13:07:42.734840] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:37:12.698 [2024-11-28 13:07:42.734881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:12.698 [2024-11-28 13:07:42.734891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.698 [2024-11-28 13:07:42.734900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:12.698 [2024-11-28 13:07:42.734905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.698 [2024-11-28 13:07:42.734911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:12.698 [2024-11-28 13:07:42.734916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.698 [2024-11-28 13:07:42.734923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:12.698 [2024-11-28 13:07:42.734928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.698 [2024-11-28 13:07:42.734934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:37:12.698 [2024-11-28 13:07:42.734939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.698 [2024-11-28 13:07:42.734945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877890 is same with the state(6) to be set 00:37:12.698 [2024-11-28 13:07:42.744838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x877890 (9): Bad file descriptor 00:37:12.698 13:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:12.698 13:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:12.698 13:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:12.698 13:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.698 13:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:12.698 13:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:12.698 13:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:12.699 [2024-11-28 13:07:42.754848] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:37:12.699 [2024-11-28 13:07:42.754857] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:37:12.699 [2024-11-28 13:07:42.754861] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:37:12.699 [2024-11-28 13:07:42.754869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:12.699 [2024-11-28 13:07:42.754888] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:37:13.697 [2024-11-28 13:07:43.771301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:37:13.697 [2024-11-28 13:07:43.771393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x877890 with addr=10.0.0.2, port=4420 00:37:13.697 [2024-11-28 13:07:43.771436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877890 is same with the state(6) to be set 00:37:13.697 [2024-11-28 13:07:43.771491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x877890 (9): Bad file descriptor 00:37:13.697 [2024-11-28 13:07:43.772614] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:37:13.697 [2024-11-28 13:07:43.772685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:37:13.697 [2024-11-28 13:07:43.772708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:37:13.697 [2024-11-28 13:07:43.772732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:37:13.697 [2024-11-28 13:07:43.772753] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:37:13.697 [2024-11-28 13:07:43.772769] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:37:13.697 [2024-11-28 13:07:43.772783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:37:13.697 [2024-11-28 13:07:43.772806] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:37:13.697 [2024-11-28 13:07:43.772820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:13.697 13:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.697 13:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:37:13.697 13:07:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:14.673 [2024-11-28 13:07:44.772904] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:37:14.673 [2024-11-28 13:07:44.772920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:37:14.673 [2024-11-28 13:07:44.772929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:37:14.673 [2024-11-28 13:07:44.772935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:37:14.673 [2024-11-28 13:07:44.772940] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:37:14.673 [2024-11-28 13:07:44.772946] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:37:14.673 [2024-11-28 13:07:44.772949] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:37:14.673 [2024-11-28 13:07:44.772953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:37:14.673 [2024-11-28 13:07:44.772971] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:37:14.673 [2024-11-28 13:07:44.772989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:14.673 [2024-11-28 13:07:44.772996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.673 [2024-11-28 13:07:44.773004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:14.673 [2024-11-28 13:07:44.773010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.673 [2024-11-28 13:07:44.773016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:14.673 [2024-11-28 13:07:44.773021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.673 [2024-11-28 13:07:44.773030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:14.673 [2024-11-28 13:07:44.773035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.673 [2024-11-28 13:07:44.773041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:37:14.673 [2024-11-28 13:07:44.773046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.673 [2024-11-28 13:07:44.773052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:37:14.673 [2024-11-28 13:07:44.773474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x866f90 (9): Bad file descriptor 00:37:14.673 [2024-11-28 13:07:44.774482] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:37:14.673 [2024-11-28 13:07:44.774491] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:37:14.935 13:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:14.935 13:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:14.935 13:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:14.935 13:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.935 13:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:14.935 13:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:14.935 13:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:14.935 13:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.935 13:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:37:14.935 13:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:14.935 13:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:14.935 13:07:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:37:14.935 13:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:14.935 13:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:14.935 13:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:14.935 13:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.935 13:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:14.935 13:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:14.935 13:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:14.935 13:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.935 13:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:37:14.935 13:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:15.875 13:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:15.875 13:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:16.135 13:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:16.135 13:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.135 13:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:16.135 13:07:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:16.135 13:07:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:16.135 13:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.135 13:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:37:16.135 13:07:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:16.706 [2024-11-28 13:07:46.821054] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:37:16.706 [2024-11-28 13:07:46.821068] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:37:16.706 [2024-11-28 13:07:46.821078] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:37:16.967 [2024-11-28 13:07:46.952183] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:37:16.967 13:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:16.967 13:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:16.967 13:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:16.967 13:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.967 13:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:16.967 13:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:16.967 13:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:37:16.967 13:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.227 13:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:37:17.227 13:07:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:17.227 [2024-11-28 13:07:47.172927] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:37:17.227 [2024-11-28 13:07:47.173623] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x850bd0:1 started. 00:37:17.227 [2024-11-28 13:07:47.174521] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:37:17.227 [2024-11-28 13:07:47.174550] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:37:17.227 [2024-11-28 13:07:47.174565] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:37:17.227 [2024-11-28 13:07:47.174577] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:37:17.227 [2024-11-28 13:07:47.174583] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:37:17.227 [2024-11-28 13:07:47.178577] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x850bd0 was disconnected and freed. delete nvme_qpair. 
00:37:18.168 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:18.168 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:18.168 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:18.168 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.168 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:18.168 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:18.168 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:18.168 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.169 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:37:18.169 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:37:18.169 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3630688 00:37:18.169 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3630688 ']' 00:37:18.169 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3630688 00:37:18.169 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:37:18.169 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:18.169 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3630688 
00:37:18.169 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:18.169 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:18.169 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3630688' 00:37:18.169 killing process with pid 3630688 00:37:18.169 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3630688 00:37:18.169 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3630688 00:37:18.430 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:37:18.430 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:18.430 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:37:18.430 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:18.430 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:37:18.430 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:18.430 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:18.430 rmmod nvme_tcp 00:37:18.430 rmmod nvme_fabrics 00:37:18.430 rmmod nvme_keyring 00:37:18.430 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:18.430 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:37:18.430 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:37:18.430 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3630364 ']' 00:37:18.430 
13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3630364 00:37:18.431 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3630364 ']' 00:37:18.431 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3630364 00:37:18.431 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:37:18.431 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:18.431 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3630364 00:37:18.431 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:18.431 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:18.431 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3630364' 00:37:18.431 killing process with pid 3630364 00:37:18.431 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3630364 00:37:18.431 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3630364 00:37:18.431 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:18.431 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:18.431 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:18.431 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:37:18.431 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:37:18.431 13:07:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:18.431 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:37:18.692 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:18.692 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:18.692 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:18.692 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:18.692 13:07:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:20.606 13:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:20.606 00:37:20.606 real 0m24.385s 00:37:20.606 user 0m29.223s 00:37:20.606 sys 0m7.141s 00:37:20.606 13:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:20.606 13:07:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:20.606 ************************************ 00:37:20.606 END TEST nvmf_discovery_remove_ifc 00:37:20.606 ************************************ 00:37:20.606 13:07:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:37:20.606 13:07:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:20.606 13:07:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:20.606 13:07:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.606 ************************************ 
00:37:20.606 START TEST nvmf_identify_kernel_target 00:37:20.606 ************************************ 00:37:20.606 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:37:20.868 * Looking for test storage... 00:37:20.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:37:20.868 13:07:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:20.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.868 --rc genhtml_branch_coverage=1 00:37:20.868 --rc genhtml_function_coverage=1 00:37:20.868 --rc genhtml_legend=1 00:37:20.868 --rc geninfo_all_blocks=1 00:37:20.868 --rc geninfo_unexecuted_blocks=1 00:37:20.868 00:37:20.868 ' 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:20.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.868 --rc genhtml_branch_coverage=1 00:37:20.868 --rc genhtml_function_coverage=1 00:37:20.868 --rc genhtml_legend=1 00:37:20.868 --rc geninfo_all_blocks=1 00:37:20.868 --rc geninfo_unexecuted_blocks=1 00:37:20.868 00:37:20.868 ' 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:20.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.868 --rc genhtml_branch_coverage=1 00:37:20.868 --rc genhtml_function_coverage=1 00:37:20.868 --rc genhtml_legend=1 00:37:20.868 --rc geninfo_all_blocks=1 00:37:20.868 --rc geninfo_unexecuted_blocks=1 00:37:20.868 00:37:20.868 ' 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:20.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.868 --rc genhtml_branch_coverage=1 00:37:20.868 --rc genhtml_function_coverage=1 00:37:20.868 --rc genhtml_legend=1 00:37:20.868 --rc geninfo_all_blocks=1 
00:37:20.868 --rc geninfo_unexecuted_blocks=1 00:37:20.868 00:37:20.868 ' 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:20.868 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:20.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:37:20.869 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:29.011 13:07:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:29.011 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:29.012 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:29.012 13:07:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:29.012 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:29.012 13:07:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:29.012 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:29.012 Found net devices under 0000:4b:00.1: cvl_0_1 
00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:29.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:29.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:37:29.012 00:37:29.012 --- 10.0.0.2 ping statistics --- 00:37:29.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:29.012 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:29.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:29.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:37:29.012 00:37:29.012 --- 10.0.0.1 ping statistics --- 00:37:29.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:29.012 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:37:29.012 
13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:29.012 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:37:29.013 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:29.013 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:29.013 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:29.013 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:37:29.013 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:37:29.013 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:37:29.013 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:29.013 13:07:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:32.311 Waiting for block devices as requested 00:37:32.311 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:32.311 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:32.311 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:32.311 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:32.311 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:32.311 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:32.572 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:32.572 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:32.572 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:32.833 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:32.833 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:33.094 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:33.094 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:33.094 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:33.355 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 
00:37:33.355 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:33.355 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:33.616 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:33.616 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:33.616 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:33.616 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:37:33.616 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:33.616 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:37:33.616 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:33.616 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:33.616 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:33.877 No valid GPT data, bailing 00:37:33.877 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:33.877 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:37:33.877 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:37:33.877 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:33.877 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:33.877 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:33.877 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:33.877 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:33.877 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:33.877 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:37:33.877 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:37:33.877 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:37:33.877 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:37:33.877 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:37:33.877 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:37:33.877 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:37:33.877 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:33.877 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:37:33.877 00:37:33.877 Discovery Log Number of Records 2, Generation counter 2 00:37:33.877 =====Discovery Log Entry 0====== 00:37:33.877 trtype: tcp 00:37:33.877 adrfam: ipv4 00:37:33.877 subtype: current discovery subsystem 
00:37:33.877 treq: not specified, sq flow control disable supported 00:37:33.877 portid: 1 00:37:33.877 trsvcid: 4420 00:37:33.877 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:33.877 traddr: 10.0.0.1 00:37:33.877 eflags: none 00:37:33.877 sectype: none 00:37:33.877 =====Discovery Log Entry 1====== 00:37:33.877 trtype: tcp 00:37:33.877 adrfam: ipv4 00:37:33.877 subtype: nvme subsystem 00:37:33.877 treq: not specified, sq flow control disable supported 00:37:33.877 portid: 1 00:37:33.877 trsvcid: 4420 00:37:33.877 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:33.877 traddr: 10.0.0.1 00:37:33.877 eflags: none 00:37:33.877 sectype: none 00:37:33.877 13:08:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:37:33.877 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:37:34.139 ===================================================== 00:37:34.139 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:37:34.139 ===================================================== 00:37:34.139 Controller Capabilities/Features 00:37:34.139 ================================ 00:37:34.139 Vendor ID: 0000 00:37:34.139 Subsystem Vendor ID: 0000 00:37:34.139 Serial Number: 58087904b3633a4b0b29 00:37:34.139 Model Number: Linux 00:37:34.139 Firmware Version: 6.8.9-20 00:37:34.139 Recommended Arb Burst: 0 00:37:34.139 IEEE OUI Identifier: 00 00 00 00:37:34.139 Multi-path I/O 00:37:34.139 May have multiple subsystem ports: No 00:37:34.139 May have multiple controllers: No 00:37:34.139 Associated with SR-IOV VF: No 00:37:34.139 Max Data Transfer Size: Unlimited 00:37:34.139 Max Number of Namespaces: 0 00:37:34.139 Max Number of I/O Queues: 1024 00:37:34.139 NVMe Specification Version (VS): 1.3 00:37:34.139 NVMe Specification Version (Identify): 1.3 00:37:34.139 Maximum Queue Entries: 1024 
00:37:34.139 Contiguous Queues Required: No
00:37:34.139 Arbitration Mechanisms Supported
00:37:34.139 Weighted Round Robin: Not Supported
00:37:34.139 Vendor Specific: Not Supported
00:37:34.139 Reset Timeout: 7500 ms
00:37:34.139 Doorbell Stride: 4 bytes
00:37:34.139 NVM Subsystem Reset: Not Supported
00:37:34.139 Command Sets Supported
00:37:34.139 NVM Command Set: Supported
00:37:34.139 Boot Partition: Not Supported
00:37:34.139 Memory Page Size Minimum: 4096 bytes
00:37:34.139 Memory Page Size Maximum: 4096 bytes
00:37:34.139 Persistent Memory Region: Not Supported
00:37:34.139 Optional Asynchronous Events Supported
00:37:34.139 Namespace Attribute Notices: Not Supported
00:37:34.139 Firmware Activation Notices: Not Supported
00:37:34.139 ANA Change Notices: Not Supported
00:37:34.139 PLE Aggregate Log Change Notices: Not Supported
00:37:34.139 LBA Status Info Alert Notices: Not Supported
00:37:34.139 EGE Aggregate Log Change Notices: Not Supported
00:37:34.139 Normal NVM Subsystem Shutdown event: Not Supported
00:37:34.139 Zone Descriptor Change Notices: Not Supported
00:37:34.139 Discovery Log Change Notices: Supported
00:37:34.139 Controller Attributes
00:37:34.139 128-bit Host Identifier: Not Supported
00:37:34.139 Non-Operational Permissive Mode: Not Supported
00:37:34.139 NVM Sets: Not Supported
00:37:34.139 Read Recovery Levels: Not Supported
00:37:34.139 Endurance Groups: Not Supported
00:37:34.139 Predictable Latency Mode: Not Supported
00:37:34.139 Traffic Based Keep ALive: Not Supported
00:37:34.139 Namespace Granularity: Not Supported
00:37:34.139 SQ Associations: Not Supported
00:37:34.139 UUID List: Not Supported
00:37:34.139 Multi-Domain Subsystem: Not Supported
00:37:34.139 Fixed Capacity Management: Not Supported
00:37:34.139 Variable Capacity Management: Not Supported
00:37:34.139 Delete Endurance Group: Not Supported
00:37:34.139 Delete NVM Set: Not Supported
00:37:34.139 Extended LBA Formats Supported: Not Supported
00:37:34.139 Flexible Data Placement Supported: Not Supported
00:37:34.139
00:37:34.139 Controller Memory Buffer Support
00:37:34.139 ================================
00:37:34.139 Supported: No
00:37:34.139
00:37:34.139 Persistent Memory Region Support
00:37:34.139 ================================
00:37:34.139 Supported: No
00:37:34.139
00:37:34.139 Admin Command Set Attributes
00:37:34.139 ============================
00:37:34.139 Security Send/Receive: Not Supported
00:37:34.139 Format NVM: Not Supported
00:37:34.139 Firmware Activate/Download: Not Supported
00:37:34.139 Namespace Management: Not Supported
00:37:34.139 Device Self-Test: Not Supported
00:37:34.139 Directives: Not Supported
00:37:34.139 NVMe-MI: Not Supported
00:37:34.139 Virtualization Management: Not Supported
00:37:34.139 Doorbell Buffer Config: Not Supported
00:37:34.139 Get LBA Status Capability: Not Supported
00:37:34.140 Command & Feature Lockdown Capability: Not Supported
00:37:34.140 Abort Command Limit: 1
00:37:34.140 Async Event Request Limit: 1
00:37:34.140 Number of Firmware Slots: N/A
00:37:34.140 Firmware Slot 1 Read-Only: N/A
00:37:34.140 Firmware Activation Without Reset: N/A
00:37:34.140 Multiple Update Detection Support: N/A
00:37:34.140 Firmware Update Granularity: No Information Provided
00:37:34.140 Per-Namespace SMART Log: No
00:37:34.140 Asymmetric Namespace Access Log Page: Not Supported
00:37:34.140 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:37:34.140 Command Effects Log Page: Not Supported
00:37:34.140 Get Log Page Extended Data: Supported
00:37:34.140 Telemetry Log Pages: Not Supported
00:37:34.140 Persistent Event Log Pages: Not Supported
00:37:34.140 Supported Log Pages Log Page: May Support
00:37:34.140 Commands Supported & Effects Log Page: Not Supported
00:37:34.140 Feature Identifiers & Effects Log Page:May Support
00:37:34.140 NVMe-MI Commands & Effects Log Page: May Support
00:37:34.140 Data Area 4 for Telemetry Log: Not Supported
00:37:34.140 Error Log Page Entries Supported: 1
00:37:34.140 Keep Alive: Not Supported
00:37:34.140
00:37:34.140 NVM Command Set Attributes
00:37:34.140 ==========================
00:37:34.140 Submission Queue Entry Size
00:37:34.140 Max: 1
00:37:34.140 Min: 1
00:37:34.140 Completion Queue Entry Size
00:37:34.140 Max: 1
00:37:34.140 Min: 1
00:37:34.140 Number of Namespaces: 0
00:37:34.140 Compare Command: Not Supported
00:37:34.140 Write Uncorrectable Command: Not Supported
00:37:34.140 Dataset Management Command: Not Supported
00:37:34.140 Write Zeroes Command: Not Supported
00:37:34.140 Set Features Save Field: Not Supported
00:37:34.140 Reservations: Not Supported
00:37:34.140 Timestamp: Not Supported
00:37:34.140 Copy: Not Supported
00:37:34.140 Volatile Write Cache: Not Present
00:37:34.140 Atomic Write Unit (Normal): 1
00:37:34.140 Atomic Write Unit (PFail): 1
00:37:34.140 Atomic Compare & Write Unit: 1
00:37:34.140 Fused Compare & Write: Not Supported
00:37:34.140 Scatter-Gather List
00:37:34.140 SGL Command Set: Supported
00:37:34.140 SGL Keyed: Not Supported
00:37:34.140 SGL Bit Bucket Descriptor: Not Supported
00:37:34.140 SGL Metadata Pointer: Not Supported
00:37:34.140 Oversized SGL: Not Supported
00:37:34.140 SGL Metadata Address: Not Supported
00:37:34.140 SGL Offset: Supported
00:37:34.140 Transport SGL Data Block: Not Supported
00:37:34.140 Replay Protected Memory Block: Not Supported
00:37:34.140
00:37:34.140 Firmware Slot Information
00:37:34.140 =========================
00:37:34.140 Active slot: 0
00:37:34.140
00:37:34.140
00:37:34.140 Error Log
00:37:34.140 =========
00:37:34.140
00:37:34.140 Active Namespaces
00:37:34.140 =================
00:37:34.140 Discovery Log Page
00:37:34.140 ==================
00:37:34.140 Generation Counter: 2
00:37:34.140 Number of Records: 2
00:37:34.140 Record Format: 0
00:37:34.140
00:37:34.140 Discovery Log Entry 0
00:37:34.140 ----------------------
00:37:34.140 Transport Type: 3 (TCP)
00:37:34.140 Address Family: 1 (IPv4)
00:37:34.140 Subsystem Type: 3 (Current Discovery Subsystem)
00:37:34.140 Entry Flags:
00:37:34.140 Duplicate Returned Information: 0
00:37:34.140 Explicit Persistent Connection Support for Discovery: 0
00:37:34.140 Transport Requirements:
00:37:34.140 Secure Channel: Not Specified
00:37:34.140 Port ID: 1 (0x0001)
00:37:34.140 Controller ID: 65535 (0xffff)
00:37:34.140 Admin Max SQ Size: 32
00:37:34.140 Transport Service Identifier: 4420
00:37:34.140 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:37:34.140 Transport Address: 10.0.0.1
00:37:34.140 Discovery Log Entry 1
00:37:34.140 ----------------------
00:37:34.140 Transport Type: 3 (TCP)
00:37:34.140 Address Family: 1 (IPv4)
00:37:34.140 Subsystem Type: 2 (NVM Subsystem)
00:37:34.140 Entry Flags:
00:37:34.140 Duplicate Returned Information: 0
00:37:34.140 Explicit Persistent Connection Support for Discovery: 0
00:37:34.140 Transport Requirements:
00:37:34.140 Secure Channel: Not Specified
00:37:34.140 Port ID: 1 (0x0001)
00:37:34.140 Controller ID: 65535 (0xffff)
00:37:34.140 Admin Max SQ Size: 32
00:37:34.140 Transport Service Identifier: 4420
00:37:34.140 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn
00:37:34.140 Transport Address: 10.0.0.1
00:37:34.140 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:37:34.402 get_feature(0x01) failed
00:37:34.402 get_feature(0x02) failed
00:37:34.402 get_feature(0x04) failed
00:37:34.402 =====================================================
00:37:34.402 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:37:34.402 =====================================================
00:37:34.402 Controller Capabilities/Features
00:37:34.402 ================================
00:37:34.402 Vendor ID: 0000
00:37:34.402 Subsystem Vendor ID: 0000
00:37:34.402 Serial Number: 04a9f736d4267e14e503
00:37:34.402 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn
00:37:34.402 Firmware Version: 6.8.9-20
00:37:34.402 Recommended Arb Burst: 6
00:37:34.402 IEEE OUI Identifier: 00 00 00
00:37:34.402 Multi-path I/O
00:37:34.402 May have multiple subsystem ports: Yes
00:37:34.402 May have multiple controllers: Yes
00:37:34.402 Associated with SR-IOV VF: No
00:37:34.402 Max Data Transfer Size: Unlimited
00:37:34.402 Max Number of Namespaces: 1024
00:37:34.402 Max Number of I/O Queues: 128
00:37:34.402 NVMe Specification Version (VS): 1.3
00:37:34.402 NVMe Specification Version (Identify): 1.3
00:37:34.402 Maximum Queue Entries: 1024
00:37:34.402 Contiguous Queues Required: No
00:37:34.402 Arbitration Mechanisms Supported
00:37:34.402 Weighted Round Robin: Not Supported
00:37:34.402 Vendor Specific: Not Supported
00:37:34.402 Reset Timeout: 7500 ms
00:37:34.402 Doorbell Stride: 4 bytes
00:37:34.402 NVM Subsystem Reset: Not Supported
00:37:34.402 Command Sets Supported
00:37:34.402 NVM Command Set: Supported
00:37:34.402 Boot Partition: Not Supported
00:37:34.402 Memory Page Size Minimum: 4096 bytes
00:37:34.402 Memory Page Size Maximum: 4096 bytes
00:37:34.402 Persistent Memory Region: Not Supported
00:37:34.402 Optional Asynchronous Events Supported
00:37:34.402 Namespace Attribute Notices: Supported
00:37:34.402 Firmware Activation Notices: Not Supported
00:37:34.402 ANA Change Notices: Supported
00:37:34.402 PLE Aggregate Log Change Notices: Not Supported
00:37:34.402 LBA Status Info Alert Notices: Not Supported
00:37:34.402 EGE Aggregate Log Change Notices: Not Supported
00:37:34.402 Normal NVM Subsystem Shutdown event: Not Supported
00:37:34.402 Zone Descriptor Change Notices: Not Supported
00:37:34.402 Discovery Log Change Notices: Not Supported
00:37:34.402 Controller Attributes
00:37:34.402 128-bit Host Identifier: Supported
00:37:34.402 Non-Operational Permissive Mode: Not Supported
00:37:34.402 NVM Sets: Not Supported
00:37:34.402 Read Recovery Levels: Not Supported
00:37:34.402 Endurance Groups: Not Supported
00:37:34.402 Predictable Latency Mode: Not Supported
00:37:34.402 Traffic Based Keep ALive: Supported
00:37:34.402 Namespace Granularity: Not Supported
00:37:34.402 SQ Associations: Not Supported
00:37:34.402 UUID List: Not Supported
00:37:34.402 Multi-Domain Subsystem: Not Supported
00:37:34.402 Fixed Capacity Management: Not Supported
00:37:34.402 Variable Capacity Management: Not Supported
00:37:34.403 Delete Endurance Group: Not Supported
00:37:34.403 Delete NVM Set: Not Supported
00:37:34.403 Extended LBA Formats Supported: Not Supported
00:37:34.403 Flexible Data Placement Supported: Not Supported
00:37:34.403
00:37:34.403 Controller Memory Buffer Support
00:37:34.403 ================================
00:37:34.403 Supported: No
00:37:34.403
00:37:34.403 Persistent Memory Region Support
00:37:34.403 ================================
00:37:34.403 Supported: No
00:37:34.403
00:37:34.403 Admin Command Set Attributes
00:37:34.403 ============================
00:37:34.403 Security Send/Receive: Not Supported
00:37:34.403 Format NVM: Not Supported
00:37:34.403 Firmware Activate/Download: Not Supported
00:37:34.403 Namespace Management: Not Supported
00:37:34.403 Device Self-Test: Not Supported
00:37:34.403 Directives: Not Supported
00:37:34.403 NVMe-MI: Not Supported
00:37:34.403 Virtualization Management: Not Supported
00:37:34.403 Doorbell Buffer Config: Not Supported
00:37:34.403 Get LBA Status Capability: Not Supported
00:37:34.403 Command & Feature Lockdown Capability: Not Supported
00:37:34.403 Abort Command Limit: 4
00:37:34.403 Async Event Request Limit: 4
00:37:34.403 Number of Firmware Slots: N/A
00:37:34.403 Firmware Slot 1 Read-Only: N/A
00:37:34.403 Firmware Activation Without Reset: N/A
00:37:34.403 Multiple Update Detection Support: N/A
00:37:34.403 Firmware Update Granularity: No Information Provided
00:37:34.403 Per-Namespace SMART Log: Yes
00:37:34.403 Asymmetric Namespace Access Log Page: Supported
00:37:34.403 ANA Transition Time : 10 sec
00:37:34.403
00:37:34.403 Asymmetric Namespace Access Capabilities
00:37:34.403 ANA Optimized State : Supported
00:37:34.403 ANA Non-Optimized State : Supported
00:37:34.403 ANA Inaccessible State : Supported
00:37:34.403 ANA Persistent Loss State : Supported
00:37:34.403 ANA Change State : Supported
00:37:34.403 ANAGRPID is not changed : No
00:37:34.403 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported
00:37:34.403
00:37:34.403 ANA Group Identifier Maximum : 128
00:37:34.403 Number of ANA Group Identifiers : 128
00:37:34.403 Max Number of Allowed Namespaces : 1024
00:37:34.403 Subsystem NQN: nqn.2016-06.io.spdk:testnqn
00:37:34.403 Command Effects Log Page: Supported
00:37:34.403 Get Log Page Extended Data: Supported
00:37:34.403 Telemetry Log Pages: Not Supported
00:37:34.403 Persistent Event Log Pages: Not Supported
00:37:34.403 Supported Log Pages Log Page: May Support
00:37:34.403 Commands Supported & Effects Log Page: Not Supported
00:37:34.403 Feature Identifiers & Effects Log Page:May Support
00:37:34.403 NVMe-MI Commands & Effects Log Page: May Support
00:37:34.403 Data Area 4 for Telemetry Log: Not Supported
00:37:34.403 Error Log Page Entries Supported: 128
00:37:34.403 Keep Alive: Supported
00:37:34.403 Keep Alive Granularity: 1000 ms
00:37:34.403
00:37:34.403 NVM Command Set Attributes
00:37:34.403 ==========================
00:37:34.403 Submission Queue Entry Size
00:37:34.403 Max: 64
00:37:34.403 Min: 64
00:37:34.403 Completion Queue Entry Size
00:37:34.403 Max: 16
00:37:34.403 Min: 16
00:37:34.403 Number of Namespaces: 1024
00:37:34.403 Compare Command: Not Supported
00:37:34.403 Write Uncorrectable Command: Not Supported
00:37:34.403 Dataset Management Command: Supported
00:37:34.403 Write Zeroes Command: Supported
00:37:34.403 Set Features Save Field: Not Supported
00:37:34.403 Reservations: Not Supported
00:37:34.403 Timestamp: Not Supported
00:37:34.403 Copy: Not Supported
00:37:34.403 Volatile Write Cache: Present
00:37:34.403 Atomic Write Unit (Normal): 1
00:37:34.403 Atomic Write Unit (PFail): 1
00:37:34.403 Atomic Compare & Write Unit: 1
00:37:34.403 Fused Compare & Write: Not Supported
00:37:34.403 Scatter-Gather List
00:37:34.403 SGL Command Set: Supported
00:37:34.403 SGL Keyed: Not Supported
00:37:34.403 SGL Bit Bucket Descriptor: Not Supported
00:37:34.403 SGL Metadata Pointer: Not Supported
00:37:34.403 Oversized SGL: Not Supported
00:37:34.403 SGL Metadata Address: Not Supported
00:37:34.403 SGL Offset: Supported
00:37:34.403 Transport SGL Data Block: Not Supported
00:37:34.403 Replay Protected Memory Block: Not Supported
00:37:34.403
00:37:34.403 Firmware Slot Information
00:37:34.403 =========================
00:37:34.403 Active slot: 0
00:37:34.403
00:37:34.403 Asymmetric Namespace Access
00:37:34.403 ===========================
00:37:34.403 Change Count : 0
00:37:34.403 Number of ANA Group Descriptors : 1
00:37:34.403 ANA Group Descriptor : 0
00:37:34.403 ANA Group ID : 1
00:37:34.403 Number of NSID Values : 1
00:37:34.403 Change Count : 0
00:37:34.403 ANA State : 1
00:37:34.403 Namespace Identifier : 1
00:37:34.403
00:37:34.403 Commands Supported and Effects
00:37:34.403 ==============================
00:37:34.403 Admin Commands
00:37:34.403 --------------
00:37:34.403 Get Log Page (02h): Supported
00:37:34.403 Identify (06h): Supported
00:37:34.403 Abort (08h): Supported
00:37:34.403 Set Features (09h): Supported
00:37:34.403 Get Features (0Ah): Supported
00:37:34.403 Asynchronous Event Request (0Ch): Supported
00:37:34.403 Keep Alive (18h): Supported
00:37:34.403 I/O Commands
00:37:34.403 ------------
00:37:34.403 Flush (00h): Supported
00:37:34.403 Write (01h): Supported LBA-Change
00:37:34.403 Read (02h): Supported
00:37:34.403 Write Zeroes (08h): Supported LBA-Change
00:37:34.403 Dataset Management (09h): Supported
00:37:34.403
00:37:34.403 Error Log
00:37:34.403 =========
00:37:34.403 Entry: 0
00:37:34.403 Error Count: 0x3
00:37:34.403 Submission Queue Id: 0x0
00:37:34.403 Command Id: 0x5
00:37:34.403 Phase Bit: 0
00:37:34.403 Status Code: 0x2
00:37:34.403 Status Code Type: 0x0
00:37:34.403 Do Not Retry: 1
00:37:34.403 Error Location: 0x28
00:37:34.403 LBA: 0x0
00:37:34.403 Namespace: 0x0
00:37:34.403 Vendor Log Page: 0x0
00:37:34.403 -----------
00:37:34.403 Entry: 1
00:37:34.403 Error Count: 0x2
00:37:34.403 Submission Queue Id: 0x0
00:37:34.403 Command Id: 0x5
00:37:34.403 Phase Bit: 0
00:37:34.403 Status Code: 0x2
00:37:34.403 Status Code Type: 0x0
00:37:34.403 Do Not Retry: 1
00:37:34.403 Error Location: 0x28
00:37:34.403 LBA: 0x0
00:37:34.403 Namespace: 0x0
00:37:34.403 Vendor Log Page: 0x0
00:37:34.403 -----------
00:37:34.403 Entry: 2
00:37:34.403 Error Count: 0x1
00:37:34.403 Submission Queue Id: 0x0
00:37:34.403 Command Id: 0x4
00:37:34.403 Phase Bit: 0
00:37:34.403 Status Code: 0x2
00:37:34.403 Status Code Type: 0x0
00:37:34.403 Do Not Retry: 1
00:37:34.403 Error Location: 0x28
00:37:34.403 LBA: 0x0
00:37:34.403 Namespace: 0x0
00:37:34.403 Vendor Log Page: 0x0
00:37:34.403
00:37:34.403 Number of Queues
00:37:34.403 ================
00:37:34.403 Number of I/O Submission Queues: 128
00:37:34.403 Number of I/O Completion Queues: 128
00:37:34.403
00:37:34.403 ZNS Specific Controller Data
00:37:34.403 ============================
00:37:34.403 Zone Append Size Limit: 0
00:37:34.403
00:37:34.403
00:37:34.403 Active Namespaces
00:37:34.403 =================
00:37:34.403 get_feature(0x05) failed
00:37:34.403 Namespace ID:1
00:37:34.403 Command Set Identifier: NVM (00h)
00:37:34.403 Deallocate: Supported
00:37:34.403 Deallocated/Unwritten Error: Not Supported
00:37:34.403 Deallocated Read Value: Unknown
00:37:34.403 Deallocate in Write Zeroes: Not Supported
00:37:34.403 Deallocated Guard Field: 0xFFFF
00:37:34.403 Flush: Supported
00:37:34.403 Reservation: Not Supported
00:37:34.403 Namespace Sharing Capabilities: Multiple Controllers
00:37:34.403 Size (in LBAs): 3750748848 (1788GiB)
00:37:34.403 Capacity (in LBAs): 3750748848 (1788GiB)
00:37:34.403 Utilization (in LBAs): 3750748848 (1788GiB)
00:37:34.403 UUID: d46cfb77-4f14-4ded-95c8-55a29901468b
00:37:34.403 Thin Provisioning: Not Supported
00:37:34.403 Per-NS Atomic Units: Yes
00:37:34.403 Atomic Write Unit (Normal): 8
00:37:34.403 Atomic Write Unit (PFail): 8
00:37:34.403 Preferred Write Granularity: 8
00:37:34.403 Atomic Compare & Write Unit: 8
00:37:34.403 Atomic Boundary Size (Normal): 0
00:37:34.403 Atomic Boundary Size (PFail): 0
00:37:34.403 Atomic Boundary Offset: 0
00:37:34.403 NGUID/EUI64 Never Reused: No
00:37:34.403 ANA group ID: 1
00:37:34.403 Namespace Write Protected: No
00:37:34.403 Number of LBA Formats: 1
00:37:34.403 Current LBA Format: LBA Format #00
00:37:34.403 LBA Format #00: Data Size: 512 Metadata Size: 0
00:37:34.403
00:37:34.404 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:37:34.404 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:34.404 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync
00:37:34.404 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:34.404 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e
00:37:34.404 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:37:34.404 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:37:34.404 rmmod nvme_tcp
00:37:34.404 rmmod nvme_fabrics
00:37:34.404 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:37:34.404 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e
00:37:34.404 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0
00:37:34.404 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:37:34.404 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:37:34.404 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:37:34.404 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:37:34.404 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr
00:37:34.404 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save
00:37:34.404 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:37:34.404 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore
00:37:34.404 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:37:34.404 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:37:34.404 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:34.404 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:37:34.404 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:36.951 13:08:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:37:36.951 13:08:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:37:36.951 13:08:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:37:36.951 13:08:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0
00:37:36.951 13:08:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:37:36.951 13:08:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:37:36.951 13:08:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:37:36.951 13:08:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:37:36.951 13:08:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:37:36.951 13:08:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
00:37:36.951 13:08:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:37:40.253 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:37:40.253 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:37:40.253 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:37:40.253 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:37:40.253 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:37:40.253 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:37:40.253 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:37:40.253 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:37:40.253 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:37:40.253 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:37:40.253 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:37:40.253 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:37:40.253 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:37:40.253 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:37:40.253 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:37:40.253 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:37:40.253 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:37:40.825
00:37:40.825 real 0m19.941s
00:37:40.825 user 0m5.354s
00:37:40.825 sys 0m11.414s
00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:37:40.825 ************************************
00:37:40.825 END TEST nvmf_identify_kernel_target
00:37:40.825 ************************************
00:37:40.825 13:08:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:37:40.825 13:08:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:37:40.825 13:08:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:37:40.825 13:08:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:37:40.825 ************************************
00:37:40.825 START TEST nvmf_auth_host
00:37:40.825 ************************************
00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:37:40.825 * Looking for test storage...
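(Annotation, not part of the captured console output.) The nvmf_identify_kernel_target run traced above drives the kernel nvmet target purely through configfs: mkdir the subsystem, namespace, and port nodes, echo values into their attribute files, symlink the subsystem into the port, then verify with `nvme discover`/`spdk_nvme_identify`, and tear everything down in reverse. The sketch below summarizes that sequence. The attribute file names (`attr_model`, `attr_allow_any_host`, `device_path`, `enable`, `addr_*`) are the standard nvmet configfs names and are inferred here, since the xtrace only shows the echoed values; the real steps need root plus the `nvmet`/`nvmet_tcp` modules, so by default this sketch only prints each command.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the configfs setup/teardown performed by nvmf/common.sh
# in the log above. Set DRY_RUN=0 (as root, with nvmet + nvmet_tcp loaded)
# to actually execute the steps; the default just echoes them.
set -u

NQN=nqn.2016-06.io.spdk:testnqn
CFG=/sys/kernel/config/nvmet
SUBSYS=$CFG/subsystems/$NQN
PORT=$CFG/ports/1

run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }
put() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ echo $1 > $2"; else echo "$1" > "$2"; fi; }

setup_kernel_target() {
  run mkdir "$SUBSYS"                              # common.sh@686
  run mkdir "$SUBSYS/namespaces/1"                 # common.sh@687
  run mkdir "$PORT"                                # common.sh@688
  put "SPDK-$NQN" "$SUBSYS/attr_model"             # model string seen in identify output
  put 1 "$SUBSYS/attr_allow_any_host"
  put /dev/nvme0n1 "$SUBSYS/namespaces/1/device_path"
  put 1 "$SUBSYS/namespaces/1/enable"
  put 10.0.0.1 "$PORT/addr_traddr"
  put tcp "$PORT/addr_trtype"
  put 4420 "$PORT/addr_trsvcid"
  put ipv4 "$PORT/addr_adrfam"
  run ln -s "$SUBSYS" "$PORT/subsystems/"          # common.sh@705: expose subsystem on the port
}

clean_kernel_target() {
  run rm -f "$PORT/subsystems/$NQN"                # common.sh@716
  run rmdir "$SUBSYS/namespaces/1"                 # common.sh@717
  run rmdir "$PORT"                                # common.sh@718
  run rmdir "$SUBSYS"                              # common.sh@719
  run modprobe -r nvmet_tcp nvmet                  # common.sh@723
}

setup_kernel_target
clean_kernel_target
```

Teardown mirrors setup in strict reverse order because configfs refuses to rmdir a node that still has children or an active symlink, which is why the log removes the port symlink first and unloads the modules last.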
00:37:40.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:37:40.825 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:40.826 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:40.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.826 --rc genhtml_branch_coverage=1 00:37:40.826 --rc genhtml_function_coverage=1 00:37:40.826 --rc genhtml_legend=1 00:37:40.826 --rc geninfo_all_blocks=1 00:37:40.826 --rc geninfo_unexecuted_blocks=1 00:37:40.826 00:37:40.826 ' 00:37:40.826 13:08:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:40.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.826 --rc genhtml_branch_coverage=1 00:37:40.826 --rc genhtml_function_coverage=1 00:37:40.826 --rc genhtml_legend=1 00:37:40.826 --rc geninfo_all_blocks=1 00:37:40.826 --rc geninfo_unexecuted_blocks=1 00:37:40.826 00:37:40.826 ' 00:37:40.826 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:40.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.826 --rc genhtml_branch_coverage=1 00:37:40.826 --rc genhtml_function_coverage=1 00:37:40.826 --rc genhtml_legend=1 00:37:40.826 --rc geninfo_all_blocks=1 00:37:40.826 --rc geninfo_unexecuted_blocks=1 00:37:40.826 00:37:40.826 ' 00:37:40.826 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:40.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.826 --rc genhtml_branch_coverage=1 00:37:40.826 --rc genhtml_function_coverage=1 00:37:40.826 --rc genhtml_legend=1 00:37:40.826 --rc geninfo_all_blocks=1 00:37:40.826 --rc geninfo_unexecuted_blocks=1 00:37:40.826 00:37:40.826 ' 00:37:40.826 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.086 13:08:10 
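The `paths/export.sh` trace above prepends the same toolchain directories on every re-source, so PATH accumulates duplicate entries. A small dedup pass (a hypothetical helper, not part of SPDK's scripts) that squeezes repeats while preserving first-occurrence order:

```shell
# Collapse duplicate PATH entries, keeping the first occurrence of each.
dedup_path() {
    printf '%s' "$1" | tr ':' '\n' | awk '!seen[$0]++' | paste -sd: -
}

# Demo with a small duplicated PATH like the one in the trace:
PATH_DEMO="/opt/go/bin:/usr/bin:/opt/go/bin:/bin:/usr/bin"
dedup_path "$PATH_DEMO"
```

Lookup semantics are unchanged, since the shell only ever resolves a command against the first matching entry anyway.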
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:41.086 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:41.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:41.087 13:08:10 
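`host/auth.sh` declares three digests and five ffdhe groups above; the auth tests later in this run pair every digest with every DH group. A minimal sketch of that 3×5 enumeration, using the exact names from the arrays in the trace:

```shell
# Enumerate the DH-HMAC-CHAP test matrix implied by host/auth.sh:
# every digest is exercised against every ffdhe group (15 combinations).
count=0
for d in sha256 sha384 sha512; do
    for g in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        printf 'dhchap: digest=%s dhgroup=%s\n' "$d" "$g"
        count=$((count + 1))
    done
done
printf 'total combinations: %d\n' "$count"
```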
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:37:41.087 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:49.226 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:49.226 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:37:49.226 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:49.226 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:49.226 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:49.226 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:49.226 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:49.226 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:37:49.226 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:49.226 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:49.227 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:49.227 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:49.227 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:49.227 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:49.227 13:08:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:49.227 13:08:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:49.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:49.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:37:49.227 00:37:49.227 --- 10.0.0.2 ping statistics --- 00:37:49.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:49.227 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:49.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:49.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:37:49.227 00:37:49.227 --- 10.0.0.1 ping statistics --- 00:37:49.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:49.227 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3644900 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3644900 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:37:49.227 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3644900 ']' 00:37:49.228 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:49.228 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:49.228 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:49.228 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:49.228 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:49.489 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:49.489 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:37:49.489 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:49.489 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:49.489 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:49.489 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:37:49.490 13:08:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=512e03e30064e784d8dff4a01e4b9bec 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.mSO 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 512e03e30064e784d8dff4a01e4b9bec 0 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 512e03e30064e784d8dff4a01e4b9bec 0 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=512e03e30064e784d8dff4a01e4b9bec 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.mSO 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.mSO 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.mSO 
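The `gen_dhchap_key null 32` sequence above reads 16 random bytes with `xxd`, giving a 32-hex-character key string, then hands it to `format_key DHHC-1 <key> 0` (an inline `python -`). A hedged re-creation of those steps: the wrapping shown here (base64 of the key characters plus a little-endian CRC32, digest id as two hex digits) is an assumption inferred from the trace, not a verified quote of SPDK's `nvmf/common.sh`.

```shell
# Sketch of gen_dhchap_key null 32: 16 random bytes -> 32 hex chars,
# wrapped into a DHHC-1 secret file with mode 0600.
key=$(od -An -tx1 -N16 /dev/urandom | tr -d ' \n')   # 32 hex characters
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" 0 <<'EOF' > "$file"
import base64, struct, sys, zlib
key = sys.argv[1].encode()    # the hex string itself is the secret (assumption)
digest = int(sys.argv[2])     # 0 = null, 1 = sha256, 2 = sha384, 3 = sha512
crc = struct.pack("<I", zlib.crc32(key))
print("DHHC-1:%02x:%s:" % (digest, base64.b64encode(key + crc).decode()))
EOF
chmod 0600 "$file"
cat "$file"
```

The `ckeys` variants in the log are produced the same way, only with a longer length and a non-null digest id.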
00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=44cc22b0edf053e1b0fdb398d96764d7a7ba408c1c9b66fddc798541125893d5 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.05X 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 44cc22b0edf053e1b0fdb398d96764d7a7ba408c1c9b66fddc798541125893d5 3 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 44cc22b0edf053e1b0fdb398d96764d7a7ba408c1c9b66fddc798541125893d5 3 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=44cc22b0edf053e1b0fdb398d96764d7a7ba408c1c9b66fddc798541125893d5 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.05X 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.05X 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.05X 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d326b8dd51e51ccf4d17084b1ab34d92901cc65f79a799d4 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Cid 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d326b8dd51e51ccf4d17084b1ab34d92901cc65f79a799d4 0 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d326b8dd51e51ccf4d17084b1ab34d92901cc65f79a799d4 0 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d326b8dd51e51ccf4d17084b1ab34d92901cc65f79a799d4 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:37:49.490 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:37:49.751 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Cid 00:37:49.751 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Cid 00:37:49.751 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Cid 00:37:49.751 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:37:49.751 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:37:49.751 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:49.751 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cb625a8a45e498c560577cf9eef3e4aee250c14e3b8bf511 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.kJ1 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cb625a8a45e498c560577cf9eef3e4aee250c14e3b8bf511 2 00:37:49.752 13:08:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cb625a8a45e498c560577cf9eef3e4aee250c14e3b8bf511 2 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cb625a8a45e498c560577cf9eef3e4aee250c14e3b8bf511 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.kJ1 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.kJ1 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.kJ1 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cf4e1acba5be1f16946839e2acff500b 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.j1I 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cf4e1acba5be1f16946839e2acff500b 1 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cf4e1acba5be1f16946839e2acff500b 1 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cf4e1acba5be1f16946839e2acff500b 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.j1I 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.j1I 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.j1I 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=edf465b12e82a5c3858726da30b6c33a 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.aWX 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key edf465b12e82a5c3858726da30b6c33a 1 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 edf465b12e82a5c3858726da30b6c33a 1 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=edf465b12e82a5c3858726da30b6c33a 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.aWX 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.aWX 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.aWX 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:37:49.752 13:08:19
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f5eae874d8b4ed98fc2c436bed3226dbfa4bdc7b92a81222 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Mtl 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f5eae874d8b4ed98fc2c436bed3226dbfa4bdc7b92a81222 2 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f5eae874d8b4ed98fc2c436bed3226dbfa4bdc7b92a81222 2 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f5eae874d8b4ed98fc2c436bed3226dbfa4bdc7b92a81222 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:37:49.752 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Mtl 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Mtl 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Mtl 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=14c5c56dbc016049373ec5d615f8fd58 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3NF 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 14c5c56dbc016049373ec5d615f8fd58 0 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 14c5c56dbc016049373ec5d615f8fd58 0 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=14c5c56dbc016049373ec5d615f8fd58 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3NF 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3NF 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.3NF 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:37:50.013 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:37:50.013 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7b0ed8d3435a9cb517ee4254df7be877fa4475b4440f471f07056d4cb363a83d 00:37:50.013 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:37:50.013 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.VgJ 00:37:50.013 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7b0ed8d3435a9cb517ee4254df7be877fa4475b4440f471f07056d4cb363a83d 3 00:37:50.013 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7b0ed8d3435a9cb517ee4254df7be877fa4475b4440f471f07056d4cb363a83d 3 00:37:50.013 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:37:50.013 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:37:50.013 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7b0ed8d3435a9cb517ee4254df7be877fa4475b4440f471f07056d4cb363a83d 00:37:50.013 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:37:50.013 13:08:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:37:50.013 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.VgJ 00:37:50.013 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.VgJ 00:37:50.013 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.VgJ 00:37:50.013 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:37:50.013 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3644900 00:37:50.013 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3644900 ']' 00:37:50.013 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:50.013 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:50.013 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:50.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:50.013 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:50.013 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.mSO 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.05X ]] 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.05X 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Cid 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.kJ1 ]] 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kJ1 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.j1I 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.274 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.aWX ]] 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.aWX 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.Mtl 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.3NF ]] 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.3NF 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.VgJ 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:50.275 13:08:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:37:50.275 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:37:50.535 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:50.535 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:53.835 Waiting for block devices as requested 00:37:53.835 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:53.835 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:54.095 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:54.095 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:54.095 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:54.095 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:54.355 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:54.355 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:54.355 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:54.616 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:54.616 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:54.877 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:54.877 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:54.877 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:54.877 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:55.141 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:55.141 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:56.087 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:56.087 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:56.087 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:56.087 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:37:56.087 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:37:56.087 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:37:56.087 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:56.087 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:56.087 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:56.087 No valid GPT data, bailing 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699
-- # echo 10.0.0.1 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:37:56.088 00:37:56.088 Discovery Log Number of Records 2, Generation counter 2 00:37:56.088 =====Discovery Log Entry 0====== 00:37:56.088 trtype: tcp 00:37:56.088 adrfam: ipv4 00:37:56.088 subtype: current discovery subsystem 00:37:56.088 treq: not specified, sq flow control disable supported 00:37:56.088 portid: 1 00:37:56.088 trsvcid: 4420 00:37:56.088 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:56.088 traddr: 10.0.0.1 00:37:56.088 eflags: none 00:37:56.088 sectype: none 00:37:56.088 =====Discovery Log Entry 1====== 00:37:56.088 trtype: tcp 00:37:56.088 adrfam: ipv4 00:37:56.088 subtype: nvme subsystem 00:37:56.088 treq: not specified, sq flow control disable supported 00:37:56.088 portid: 1 00:37:56.088 trsvcid: 4420 00:37:56.088 subnqn: nqn.2024-02.io.spdk:cnode0 00:37:56.088 traddr: 10.0.0.1 00:37:56.088 eflags: none 00:37:56.088 sectype: none 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: ]] 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.088 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:56.350 nvme0n1 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: ]] 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:56.350 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:37:56.351 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:56.351 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:56.612 nvme0n1
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==:
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==:
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==:
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: ]]
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==:
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:56.612 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:37:56.613 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:37:56.613 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:56.613 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:37:56.613 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:56.613 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:56.613 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:56.613 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:56.613 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:56.613 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:56.613 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:56.613 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:56.613 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:56.613 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:56.613 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:56.613 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:56.613 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:56.613 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:56.613 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:37:56.613 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:56.613 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:56.874 nvme0n1
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB:
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7:
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB:
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: ]]
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7:
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:56.874 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:57.135 nvme0n1
13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==:
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M:
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==:
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: ]]
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M:
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:57.135 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:37:57.136 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:37:57.136 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:57.136 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:37:57.136 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:57.136 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:57.136 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:57.136 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:57.136 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:57.136 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:57.136 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:57.136 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:57.136 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:57.136 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:57.136 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:57.136 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:57.136 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:57.136 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:57.136 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:37:57.136 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:57.136 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:57.396 nvme0n1
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=:
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=:
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:37:57.396 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:57.397 nvme0n1
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:57.397 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1:
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=:
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1:
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: ]]
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=:
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:57.658 nvme0n1
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:57.658 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:57.659 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:57.659 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:57.659 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==:
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==:
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==:
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: ]]
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==:
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:57.921 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:57.921 nvme0n1
13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:57.921 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:57.921 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:57.921 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:57.921 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:57.921 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:58.182 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:58.182 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:58.182 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:58.182 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:58.182 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:37:58.182 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:37:58.182 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB:
00:37:58.182 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7:
00:37:58.182 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:58.182 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:37:58.182 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB:
00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: ]]
00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7:
00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.183 nvme0n1 00:37:58.183 13:08:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.183 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:37:58.445 13:08:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: ]] 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.445 nvme0n1 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.445 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:58.706 13:08:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:58.706 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:58.707 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:58.707 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:58.707 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.707 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.707 nvme0n1 00:37:58.707 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.707 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:58.707 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:58.707 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.707 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.707 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.967 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:58.967 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:58.967 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.967 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:58.967 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.967 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:58.967 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:58.967 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:37:58.967 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:58.967 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:58.967 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:58.967 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:58.967 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:37:58.967 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:37:58.967 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:58.967 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:58.967 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:37:58.967 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: ]] 00:37:58.967 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.968 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.230 nvme0n1 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: ]] 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:59.230 
13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:59.230 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:59.231 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.231 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.231 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.231 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:59.231 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:59.231 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:59.231 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:59.231 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:59.231 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:59.231 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:59.231 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:59.231 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:59.231 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:59.231 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:59.231 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:59.231 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.231 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.492 nvme0n1 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:59.492 13:08:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: ]] 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.492 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.754 nvme0n1 00:37:59.754 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.754 13:08:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:59.754 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.754 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:59.754 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.754 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:38:00.015 
13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: ]] 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:00.015 13:08:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.015 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.276 nvme0n1 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.276 13:08:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:00.276 
13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.276 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.536 nvme0n1 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: ]] 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:38:00.536 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:38:00.537 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:00.537 13:08:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:00.537 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:00.537 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:00.537 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:00.537 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:00.537 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.537 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.537 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.537 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:00.537 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:00.537 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:00.537 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:00.537 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:00.537 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:00.537 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:00.537 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:00.537 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:00.537 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:00.537 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:00.537 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:00.537 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.537 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.109 nvme0n1 00:38:01.109 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.109 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:01.109 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.109 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:01.109 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.109 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: ]] 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:01.110 13:08:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.110 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.680 nvme0n1 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: ]] 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.680 13:08:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:01.680 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:01.681 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:01.681 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:01.681 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:01.681 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:01.681 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:01.681 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:01.681 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:01.681 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:01.681 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:01.681 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:01.681 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.681 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.941 nvme0n1 00:38:01.941 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.941 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:01.941 13:08:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:01.941 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.941 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.941 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:02.201 13:08:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: ]] 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:02.201 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:02.201 13:08:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:02.202 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:02.202 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:02.202 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:02.202 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:02.202 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:02.202 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:02.202 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:02.202 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:02.202 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.202 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.462 nvme0n1 00:38:02.462 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.462 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:02.462 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:02.462 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.462 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.462 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.462 13:08:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:02.462 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:02.462 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.462 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.462 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.462 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:02.462 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:38:02.462 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:02.462 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:02.462 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:02.462 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:02.462 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:02.462 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:02.462 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:02.721 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:02.721 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:02.721 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:02.721 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:38:02.721 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:02.721 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:02.721 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:02.721 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:02.721 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:02.721 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:02.721 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.721 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.721 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.721 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:02.721 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:02.721 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:02.721 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:02.721 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:02.721 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:02.721 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:02.722 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:02.722 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:02.722 13:08:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:02.722 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:02.722 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:02.722 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.722 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.982 nvme0n1 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: ]] 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:02.982 13:08:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.982 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:03.923 nvme0n1 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:03.923 13:08:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: ]] 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:03.923 13:08:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:03.923 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.923 13:08:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:04.495 nvme0n1 00:38:04.495 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.495 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:04.495 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:04.495 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.495 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: ]] 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.496 13:08:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.496 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.066 nvme0n1 00:38:05.066 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.066 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:05.066 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:05.066 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.066 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: ]] 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.325 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.896 nvme0n1 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:05.896 
13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.896 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:06.839 nvme0n1 00:38:06.839 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.839 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:06.839 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.839 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:06.839 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:06.839 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.839 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:06.839 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:06.839 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.839 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:06.839 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.839 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:38:06.839 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:06.839 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:38:06.839 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:38:06.839 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: ]] 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:06.840 nvme0n1 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:06.840 
13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: ]] 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.840 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.101 nvme0n1 
00:38:07.101 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.101 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:07.102 13:08:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: ]] 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:07.102 
13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.102 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.363 nvme0n1 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.363 13:08:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: ]] 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.363 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.624 nvme0n1 00:38:07.624 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.624 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:07.625 13:08:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.625 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.886 nvme0n1 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: ]] 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.886 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.147 nvme0n1 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:08.147 
13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: ]] 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.147 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.438 nvme0n1 00:38:08.438 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:38:08.438 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:08.438 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:08.438 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.438 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.438 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.438 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:08.438 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:08.438 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.438 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 
00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: ]] 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:08.439 13:08:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.439 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.744 nvme0n1 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.744 13:08:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: ]] 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.744 nvme0n1 00:38:08.744 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.054 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:09.054 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:09.054 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.055 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.055 nvme0n1 00:38:09.055 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.055 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:09.055 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:09.055 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.055 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.055 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.055 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:09.055 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:09.055 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.055 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.317 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.317 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:09.317 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:09.317 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:09.318 13:08:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: ]] 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:09.318 13:08:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:09.318 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.318 13:08:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.579 nvme0n1 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: ]] 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.579 
13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.579 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.841 nvme0n1 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:09.841 13:08:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:09.841 13:08:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: ]] 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.841 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.101 nvme0n1 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: ]] 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.101 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.362 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.362 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:10.362 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:10.362 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:10.362 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:10.362 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:10.362 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:10.362 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:10.362 13:08:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:10.362 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:10.362 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:10.362 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:10.362 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:10.362 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.362 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.362 nvme0n1 00:38:10.362 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.622 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:10.622 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:10.622 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.622 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.623 13:08:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:10.623 13:08:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:10.623 
13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.623 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.884 nvme0n1 00:38:10.884 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:10.885 13:08:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: ]] 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.885 13:08:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:11.456 nvme0n1 
00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:11.456 13:08:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: ]] 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.456 
13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:11.456 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:11.457 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:11.457 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:11.457 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:11.457 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.457 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:11.717 nvme0n1 00:38:11.717 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.717 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:11.717 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:11.717 13:08:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.717 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:11.717 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:11.977 13:08:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: ]] 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.977 13:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.238 nvme0n1 00:38:12.238 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.238 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:12.238 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:12.238 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.238 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.238 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: ]] 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:38:12.499 13:08:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:12.499 13:08:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.499 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.760 nvme0n1 00:38:12.760 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.760 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:12.760 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:12.760 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.760 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.760 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.760 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:12.760 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:12.760 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.760 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:13.020 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.020 13:08:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:13.020 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:38:13.020 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:13.020 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:13.020 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:13.020 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:13.020 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:13.020 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:13.020 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:13.020 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:13.020 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:13.020 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:13.020 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:38:13.020 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:13.020 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:13.020 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:13.020 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:13.020 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:38:13.020 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:38:13.020 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.021 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:13.021 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.021 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:13.021 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:13.021 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:13.021 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:13.021 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:13.021 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:13.021 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:13.021 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:13.021 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:13.021 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:13.021 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:13.021 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:13.021 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:38:13.021 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:13.281 nvme0n1 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:13.281 13:08:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: ]] 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:13.281 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:13.542 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:13.542 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.542 13:08:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:14.112 nvme0n1 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: ]] 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:14.112 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:14.113 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:14.113 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:14.113 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:14.113 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.113 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:14.686 nvme0n1 00:38:14.686 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.686 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:14.686 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:14.686 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:38:14.686 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:14.686 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: ]] 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.947 13:08:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.526 nvme0n1 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: ]] 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.526 13:08:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.097 nvme0n1 00:38:16.097 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.358 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:16.358 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:16.358 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.358 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.358 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.358 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:16.358 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:16.358 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.358 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.358 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.358 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.359 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:38:16.932 nvme0n1 00:38:16.932 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.932 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:16.932 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:16.932 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.932 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.932 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.932 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:16.932 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:16.932 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.932 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.932 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.932 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:38:16.932 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:16.932 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:16.932 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:38:16.932 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: ]] 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:38:16.933 13:08:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.933 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.194 nvme0n1 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: ]] 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.194 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.455 nvme0n1 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: ]] 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:17.455 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:17.456 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:17.456 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:17.456 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:17.456 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:17.456 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:17.456 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:17.456 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:17.456 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:17.456 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.456 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.717 nvme0n1 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: ]] 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.717 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.979 nvme0n1 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.979 13:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:38:18.240 nvme0n1 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:18.240 13:08:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: ]] 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.240 13:08:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.240 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:18.241 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:18.241 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:18.241 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:18.241 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:18.241 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:18.241 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:18.241 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:18.241 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:18.241 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:18.241 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:18.241 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:18.241 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.241 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.501 nvme0n1 00:38:18.501 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.501 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:18.501 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:18.501 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.501 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.501 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.501 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:18.501 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:18.501 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.501 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.501 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.501 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:18.501 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:38:18.501 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:18.501 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:18.501 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:18.501 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:18.502 13:08:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: ]] 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.502 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.763 nvme0n1 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.763 
13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: ]] 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:18.763 13:08:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.763 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.026 nvme0n1 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.026 13:08:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: ]] 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.026 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.027 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.027 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:19.027 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:19.027 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:19.027 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:19.027 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:19.027 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:19.027 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:19.027 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:19.027 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:19.027 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:19.027 13:08:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:19.027 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:19.027 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.027 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.289 nvme0n1 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:38:19.289 13:08:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.289 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.551 nvme0n1 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.551 
13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: ]] 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.551 
13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.551 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.812 nvme0n1 00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:19.812 13:08:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==:
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==:
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==:
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: ]]
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==:
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:38:19.812 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:38:19.813 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:38:19.813 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:19.813 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:19.813 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:19.813 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:38:19.813 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:38:19.813 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:38:19.813 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:38:19.813 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:38:19.813 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:38:19.813 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:38:19.813 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:38:19.813 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:38:19.813 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:38:19.813 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:38:19.813 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:38:19.813 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:19.813 13:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:20.074 nvme0n1
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB:
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7:
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB:
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: ]]
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7:
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:20.074 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:20.337 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:20.337 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:38:20.337 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:38:20.337 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:38:20.337 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:38:20.337 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:38:20.337 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:38:20.337 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:38:20.337 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:38:20.337 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:38:20.337 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:38:20.337 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:38:20.337 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:38:20.337 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:20.337 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:20.337 nvme0n1
00:38:20.337 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==:
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M:
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==:
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: ]]
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M:
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:20.599 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:20.600 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:20.600 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:38:20.600 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:38:20.600 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:38:20.600 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:38:20.600 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:38:20.600 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:38:20.600 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:38:20.600 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:38:20.600 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:38:20.600 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:38:20.600 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:38:20.600 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:38:20.600 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:20.600 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:20.862 nvme0n1
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=:
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=:
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:20.862 13:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:21.123 nvme0n1
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1:
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=:
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1:
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: ]]
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=:
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:21.124 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:21.697 nvme0n1
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==:
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==:
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==:
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: ]]
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==:
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:21.697 13:08:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:22.270 nvme0n1
00:38:22.270 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:22.270 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:38:22.270 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:38:22.270 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:22.270 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:22.270 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:22.270 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB:
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7:
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB:
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: ]]
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7:
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:22.271 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:22.533 nvme0n1
00:38:22.533 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:22.533 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:38:22.533 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:38:22.533 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:22.533 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:22.533 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==:
00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M:
00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==:
00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: ]]
00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M:
00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.794 13:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.055 nvme0n1 00:38:23.055 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.055 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:23.055 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:23.055 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.055 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.055 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.055 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:23.055 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:23.055 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.055 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:23.317 13:08:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.317 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.578 nvme0n1 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:23.578 
13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEyZTAzZTMwMDY0ZTc4NGQ4ZGZmNGEwMWU0YjliZWPFOsw1: 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: ]] 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRjYzIyYjBlZGYwNTNlMWIwZmRiMzk4ZDk2NzY0ZDdhN2JhNDA4YzFjOWI2NmZkZGM3OTg1NDExMjU4OTNkNW/KUaI=: 00:38:23.578 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:38:23.579 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:23.579 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:23.579 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:23.579 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:23.579 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:23.579 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:23.579 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.579 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.840 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.840 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:23.840 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:23.840 13:08:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:23.840 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:23.840 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:23.840 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:23.840 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:23.840 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:23.840 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:23.840 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:23.840 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:23.840 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:23.840 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.840 13:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.413 nvme0n1 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.413 13:08:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:24.413 13:08:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: ]] 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:24.413 13:08:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.413 13:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.985 nvme0n1 00:38:24.985 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.985 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:24.985 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:24.985 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.985 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.985 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:25.245 13:08:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: ]] 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:38:25.245 13:08:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.245 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.814 nvme0n1 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjVlYWU4NzRkOGI0ZWQ5OGZjMmM0MzZiZWQzMjI2ZGJmYTRiZGM3YjkyYTgxMjIy8Vy70A==: 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: ]] 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRjNWM1NmRiYzAxNjA0OTM3M2VjNWQ2MTVmOGZkNTiATB4M: 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:25.814 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:25.815 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:25.815 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.815 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.815 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.815 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:25.815 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:25.815 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:25.815 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:25.815 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:25.815 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:25.815 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:25.815 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:25.815 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:25.815 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:25.815 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:25.815 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:25.815 13:08:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.815 13:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.754 nvme0n1 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2IwZWQ4ZDM0MzVhOWNiNTE3ZWU0MjU0ZGY3YmU4NzdmYTQ0NzViNDQ0MGY0NzFmMDcwNTZkNGNiMzYzYTgzZC6Qa7Y=: 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:26.754 
13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.754 13:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.327 nvme0n1 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: ]] 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:27.327 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.328 request: 00:38:27.328 { 00:38:27.328 "name": "nvme0", 00:38:27.328 "trtype": "tcp", 00:38:27.328 "traddr": "10.0.0.1", 00:38:27.328 "adrfam": "ipv4", 00:38:27.328 "trsvcid": "4420", 00:38:27.328 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:38:27.328 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:38:27.328 "prchk_reftag": false, 00:38:27.328 "prchk_guard": false, 00:38:27.328 "hdgst": false, 00:38:27.328 "ddgst": false, 00:38:27.328 "allow_unrecognized_csi": false, 00:38:27.328 "method": "bdev_nvme_attach_controller", 00:38:27.328 "req_id": 1 00:38:27.328 } 00:38:27.328 Got JSON-RPC error 
response 00:38:27.328 response: 00:38:27.328 { 00:38:27.328 "code": -5, 00:38:27.328 "message": "Input/output error" 00:38:27.328 } 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.328 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.590 request: 
00:38:27.590 { 00:38:27.590 "name": "nvme0", 00:38:27.590 "trtype": "tcp", 00:38:27.590 "traddr": "10.0.0.1", 00:38:27.590 "adrfam": "ipv4", 00:38:27.590 "trsvcid": "4420", 00:38:27.590 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:38:27.590 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:38:27.590 "prchk_reftag": false, 00:38:27.590 "prchk_guard": false, 00:38:27.590 "hdgst": false, 00:38:27.590 "ddgst": false, 00:38:27.590 "dhchap_key": "key2", 00:38:27.590 "allow_unrecognized_csi": false, 00:38:27.590 "method": "bdev_nvme_attach_controller", 00:38:27.590 "req_id": 1 00:38:27.590 } 00:38:27.590 Got JSON-RPC error response 00:38:27.590 response: 00:38:27.590 { 00:38:27.590 "code": -5, 00:38:27.590 "message": "Input/output error" 00:38:27.590 } 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:27.590 13:08:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.590 request: 00:38:27.590 { 00:38:27.590 "name": "nvme0", 00:38:27.590 "trtype": "tcp", 00:38:27.590 "traddr": "10.0.0.1", 00:38:27.590 "adrfam": "ipv4", 00:38:27.590 "trsvcid": "4420", 00:38:27.590 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:38:27.590 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:38:27.590 "prchk_reftag": false, 00:38:27.590 "prchk_guard": false, 00:38:27.590 "hdgst": false, 00:38:27.590 "ddgst": false, 00:38:27.590 "dhchap_key": "key1", 00:38:27.590 "dhchap_ctrlr_key": "ckey2", 00:38:27.590 "allow_unrecognized_csi": false, 00:38:27.590 "method": "bdev_nvme_attach_controller", 00:38:27.590 "req_id": 1 00:38:27.590 } 00:38:27.590 Got JSON-RPC error response 00:38:27.590 response: 00:38:27.590 { 00:38:27.590 "code": -5, 00:38:27.590 "message": "Input/output error" 00:38:27.590 } 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:27.590 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:27.591 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:27.591 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:38:27.591 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.591 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.851 nvme0n1 00:38:27.851 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.851 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:38:27.851 13:08:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:27.851 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:27.851 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:27.851 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:27.851 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:27.851 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:38:27.851 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:27.851 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:27.851 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:27.851 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: ]] 00:38:27.851 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:38:27.851 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:27.851 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.851 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.851 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.851 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:38:27.851 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:38:27.851 
13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.851 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.851 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.851 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:27.852 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:38:27.852 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:38:27.852 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:38:27.852 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:27.852 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:27.852 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:27.852 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:27.852 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:38:27.852 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.852 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.852 request: 00:38:27.852 { 00:38:27.852 "name": "nvme0", 00:38:27.852 "dhchap_key": "key1", 00:38:27.852 "dhchap_ctrlr_key": "ckey2", 00:38:27.852 "method": "bdev_nvme_set_keys", 00:38:27.852 "req_id": 1 00:38:27.852 } 00:38:27.852 Got JSON-RPC error response 00:38:27.852 response: 
00:38:27.852 { 00:38:27.852 "code": -13, 00:38:27.852 "message": "Permission denied" 00:38:27.852 } 00:38:27.852 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:27.852 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:38:27.852 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:27.852 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:27.852 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:28.112 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:38:28.112 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:38:28.112 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.112 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.112 13:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.112 13:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:38:28.112 13:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:38:29.055 13:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:38:29.055 13:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:38:29.055 13:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.055 13:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.055 13:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.055 13:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:38:29.055 13:08:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:38:30.000 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:38:30.000 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:38:30.000 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.000 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:30.000 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.000 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:38:30.000 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:38:30.000 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:30.000 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:30.000 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:30.000 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:30.000 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:30.000 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:30.000 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:30.000 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMyNmI4ZGQ1MWU1MWNjZjRkMTcwODRiMWFiMzRkOTI5MDFjYzY1Zjc5YTc5OWQ0tNM57Q==: 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: ]] 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2I2MjVhOGE0NWU0OThjNTYwNTc3Y2Y5ZWVmM2U0YWVlMjUwYzE0ZTNiOGJmNTExSpZ+SA==: 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:30.261 nvme0n1 00:38:30.261 13:09:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2Y0ZTFhY2JhNWJlMWYxNjk0NjgzOWUyYWNmZjUwMGKs/6yB: 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: ]] 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRmNDY1YjEyZTgyYTVjMzg1ODcyNmRhMzBiNmMzM2EzdEQ7: 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:38:30.261 13:09:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:30.261 request: 00:38:30.261 { 00:38:30.261 "name": "nvme0", 00:38:30.261 "dhchap_key": "key2", 00:38:30.261 "dhchap_ctrlr_key": "ckey1", 00:38:30.261 "method": "bdev_nvme_set_keys", 00:38:30.261 "req_id": 1 00:38:30.261 } 00:38:30.261 Got JSON-RPC error response 00:38:30.261 response: 00:38:30.261 { 00:38:30.261 "code": -13, 00:38:30.261 "message": "Permission denied" 00:38:30.261 } 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:38:30.261 13:09:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:30.261 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.522 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:38:30.522 13:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:31.465 rmmod nvme_tcp 
00:38:31.465 rmmod nvme_fabrics 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3644900 ']' 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3644900 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3644900 ']' 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3644900 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3644900 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3644900' 00:38:31.465 killing process with pid 3644900 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3644900 00:38:31.465 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3644900 00:38:31.726 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:31.726 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:31.726 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:31.726 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:38:31.726 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:38:31.726 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:31.726 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:38:31.726 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:31.726 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:31.726 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:31.726 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:31.726 13:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:33.642 13:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:33.642 13:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:38:33.642 13:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:38:33.642 13:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:38:33.642 13:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:38:33.642 13:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:38:33.642 13:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:38:33.642 13:09:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:38:33.904 13:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:33.904 13:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:38:33.904 13:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:38:33.904 13:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:38:33.904 13:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:37.206 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:37.206 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:37.467 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:37.467 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:37.467 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:37.467 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:37.467 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:37.467 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:37.467 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:37.467 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:37.467 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:37.467 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:37.467 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:37.467 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:37.467 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:37.467 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:37.467 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:38.041 13:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.mSO /tmp/spdk.key-null.Cid /tmp/spdk.key-sha256.j1I /tmp/spdk.key-sha384.Mtl 
/tmp/spdk.key-sha512.VgJ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:38:38.041 13:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:41.348 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:38:41.348 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:38:41.348 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:38:41.348 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:38:41.348 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:38:41.348 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:38:41.348 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:38:41.348 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:38:41.348 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:38:41.348 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:38:41.348 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:38:41.348 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:38:41.348 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:38:41.348 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:38:41.348 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:38:41.348 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:38:41.348 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:38:41.923 00:38:41.923 real 1m1.067s 00:38:41.923 user 0m54.799s 00:38:41.923 sys 0m16.130s 00:38:41.923 13:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:41.923 13:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:41.923 ************************************ 00:38:41.923 END TEST nvmf_auth_host 00:38:41.923 ************************************ 00:38:41.923 13:09:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:38:41.923 13:09:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:38:41.923 13:09:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:41.923 13:09:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:41.923 13:09:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:41.923 ************************************ 00:38:41.923 START TEST nvmf_digest 00:38:41.923 ************************************ 00:38:41.923 13:09:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:38:41.923 * Looking for test storage... 00:38:41.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:41.923 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:41.923 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:38:41.923 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:38:42.186 13:09:12 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:42.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:42.186 --rc genhtml_branch_coverage=1 00:38:42.186 --rc genhtml_function_coverage=1 00:38:42.186 --rc genhtml_legend=1 00:38:42.186 --rc geninfo_all_blocks=1 00:38:42.186 --rc geninfo_unexecuted_blocks=1 00:38:42.186 00:38:42.186 ' 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:42.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:42.186 --rc genhtml_branch_coverage=1 00:38:42.186 --rc genhtml_function_coverage=1 00:38:42.186 --rc genhtml_legend=1 00:38:42.186 --rc geninfo_all_blocks=1 00:38:42.186 --rc geninfo_unexecuted_blocks=1 00:38:42.186 00:38:42.186 ' 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:42.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:42.186 --rc genhtml_branch_coverage=1 00:38:42.186 --rc genhtml_function_coverage=1 00:38:42.186 --rc genhtml_legend=1 00:38:42.186 --rc geninfo_all_blocks=1 00:38:42.186 --rc geninfo_unexecuted_blocks=1 00:38:42.186 00:38:42.186 ' 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:42.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:42.186 --rc genhtml_branch_coverage=1 00:38:42.186 --rc genhtml_function_coverage=1 00:38:42.186 --rc genhtml_legend=1 00:38:42.186 --rc geninfo_all_blocks=1 00:38:42.186 --rc geninfo_unexecuted_blocks=1 00:38:42.186 00:38:42.186 ' 00:38:42.186 13:09:12 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:42.186 
13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:42.186 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:42.187 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:38:42.187 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:42.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:42.187 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:42.187 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:42.187 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:42.187 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:42.187 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:38:42.187 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:38:42.187 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:38:42.187 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:38:42.187 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:42.187 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:42.187 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:42.187 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:42.187 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:42.187 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:42.187 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:42.187 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:42.187 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:42.187 13:09:12 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:42.187 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:38:42.187 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:50.334 13:09:19 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:50.334 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:50.335 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:50.335 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:50.335 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:50.335 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:50.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:50.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:38:50.335 00:38:50.335 --- 10.0.0.2 ping statistics --- 00:38:50.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:50.335 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:50.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:50.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:38:50.335 00:38:50.335 --- 10.0.0.1 ping statistics --- 00:38:50.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:50.335 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:50.335 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:50.335 ************************************ 00:38:50.335 START TEST nvmf_digest_clean 00:38:50.335 ************************************ 00:38:50.335 
13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:38:50.336 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:38:50.336 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:38:50.336 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:38:50.336 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:38:50.336 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:38:50.336 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:50.336 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:50.336 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:50.336 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3662338 00:38:50.336 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3662338 00:38:50.336 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:38:50.336 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3662338 ']' 00:38:50.336 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:50.336 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:50.336 13:09:19 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:50.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:50.336 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:50.336 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:50.336 [2024-11-28 13:09:19.730551] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:38:50.336 [2024-11-28 13:09:19.730599] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:50.336 [2024-11-28 13:09:19.869536] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:50.336 [2024-11-28 13:09:19.928771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:50.336 [2024-11-28 13:09:19.945632] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:50.336 [2024-11-28 13:09:19.945664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:50.336 [2024-11-28 13:09:19.945671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:50.336 [2024-11-28 13:09:19.945678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:50.336 [2024-11-28 13:09:19.945684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:50.336 [2024-11-28 13:09:19.946309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:50.599 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:50.599 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:38:50.599 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:50.599 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:50.599 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:50.599 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:50.599 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:38:50.599 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:38:50.599 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:38:50.599 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.599 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:50.599 null0 00:38:50.599 [2024-11-28 13:09:20.643533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:50.599 [2024-11-28 13:09:20.667694] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:50.599 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.599 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:38:50.599 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:50.599 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:50.599 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:38:50.599 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:38:50.600 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:38:50.600 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:50.600 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3662472 00:38:50.600 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3662472 /var/tmp/bperf.sock 00:38:50.600 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3662472 ']' 00:38:50.600 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:38:50.600 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:50.600 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:50.600 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:50.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:38:50.600 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:50.600 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:50.861 [2024-11-28 13:09:20.736003] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:38:50.861 [2024-11-28 13:09:20.736058] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3662472 ] 00:38:50.861 [2024-11-28 13:09:20.870447] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:50.861 [2024-11-28 13:09:20.929979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:50.861 [2024-11-28 13:09:20.957807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:51.434 13:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:51.434 13:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:38:51.434 13:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:51.434 13:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:51.434 13:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:51.696 13:09:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:51.696 13:09:21 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:52.267 nvme0n1 00:38:52.267 13:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:52.267 13:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:52.267 Running I/O for 2 seconds... 00:38:54.152 18502.00 IOPS, 72.27 MiB/s [2024-11-28T12:09:24.540Z] 19056.00 IOPS, 74.44 MiB/s 00:38:54.413 Latency(us) 00:38:54.413 [2024-11-28T12:09:24.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:54.413 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:54.413 nvme0n1 : 2.00 19092.75 74.58 0.00 0.00 6697.99 2778.11 14780.09 00:38:54.413 [2024-11-28T12:09:24.540Z] =================================================================================================================== 00:38:54.413 [2024-11-28T12:09:24.540Z] Total : 19092.75 74.58 0.00 0.00 6697.99 2778.11 14780.09 00:38:54.413 { 00:38:54.413 "results": [ 00:38:54.413 { 00:38:54.413 "job": "nvme0n1", 00:38:54.413 "core_mask": "0x2", 00:38:54.413 "workload": "randread", 00:38:54.413 "status": "finished", 00:38:54.413 "queue_depth": 128, 00:38:54.413 "io_size": 4096, 00:38:54.413 "runtime": 2.002855, 00:38:54.413 "iops": 19092.74510636067, 00:38:54.413 "mibps": 74.58103557172137, 00:38:54.413 "io_failed": 0, 00:38:54.413 "io_timeout": 0, 00:38:54.413 "avg_latency_us": 6697.992543270419, 00:38:54.413 "min_latency_us": 2778.1089208152357, 00:38:54.413 "max_latency_us": 14780.086869361845 00:38:54.413 } 00:38:54.413 ], 00:38:54.413 "core_count": 1 00:38:54.413 } 00:38:54.413 
13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:54.413 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:54.413 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:54.413 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:54.413 | select(.opcode=="crc32c") 00:38:54.413 | "\(.module_name) \(.executed)"' 00:38:54.413 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:54.413 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:54.413 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:54.413 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:54.413 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:54.413 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3662472 00:38:54.413 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3662472 ']' 00:38:54.413 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3662472 00:38:54.413 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:38:54.413 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:54.413 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3662472 
00:38:54.675 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:54.675 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:54.675 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3662472' 00:38:54.675 killing process with pid 3662472 00:38:54.675 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3662472 00:38:54.675 Received shutdown signal, test time was about 2.000000 seconds 00:38:54.675 00:38:54.675 Latency(us) 00:38:54.675 [2024-11-28T12:09:24.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:54.675 [2024-11-28T12:09:24.802Z] =================================================================================================================== 00:38:54.675 [2024-11-28T12:09:24.802Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:54.675 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3662472 00:38:54.675 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:38:54.675 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:54.675 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:54.675 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:38:54.675 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:38:54.675 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:38:54.675 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 
00:38:54.675 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3663161 00:38:54.675 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3663161 /var/tmp/bperf.sock 00:38:54.675 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3663161 ']' 00:38:54.675 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:54.675 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:38:54.675 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:54.675 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:54.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:54.675 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:54.675 13:09:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:54.675 [2024-11-28 13:09:24.690961] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:38:54.675 [2024-11-28 13:09:24.691020] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663161 ] 00:38:54.675 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:54.675 Zero copy mechanism will not be used. 
00:38:54.980 [2024-11-28 13:09:24.823527] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:54.980 [2024-11-28 13:09:24.876564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:54.980 [2024-11-28 13:09:24.892896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:55.614 13:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:55.614 13:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:38:55.614 13:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:55.614 13:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:55.614 13:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:55.614 13:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:55.614 13:09:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:56.185 nvme0n1 00:38:56.185 13:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:56.185 13:09:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:56.185 I/O size of 131072 is greater than zero 
copy threshold (65536). 00:38:56.185 Zero copy mechanism will not be used. 00:38:56.185 Running I/O for 2 seconds... 00:38:58.072 3019.00 IOPS, 377.38 MiB/s [2024-11-28T12:09:28.199Z] 3509.50 IOPS, 438.69 MiB/s 00:38:58.072 Latency(us) 00:38:58.072 [2024-11-28T12:09:28.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:58.072 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:38:58.072 nvme0n1 : 2.01 3506.94 438.37 0.00 0.00 4558.85 608.99 12207.26 00:38:58.072 [2024-11-28T12:09:28.199Z] =================================================================================================================== 00:38:58.072 [2024-11-28T12:09:28.199Z] Total : 3506.94 438.37 0.00 0.00 4558.85 608.99 12207.26 00:38:58.072 { 00:38:58.072 "results": [ 00:38:58.072 { 00:38:58.072 "job": "nvme0n1", 00:38:58.072 "core_mask": "0x2", 00:38:58.072 "workload": "randread", 00:38:58.072 "status": "finished", 00:38:58.072 "queue_depth": 16, 00:38:58.072 "io_size": 131072, 00:38:58.072 "runtime": 2.006025, 00:38:58.072 "iops": 3506.9353572363257, 00:38:58.072 "mibps": 438.3669196545407, 00:38:58.072 "io_failed": 0, 00:38:58.072 "io_timeout": 0, 00:38:58.072 "avg_latency_us": 4558.851829345468, 00:38:58.072 "min_latency_us": 608.9943200801871, 00:38:58.072 "max_latency_us": 12207.256932843302 00:38:58.072 } 00:38:58.072 ], 00:38:58.072 "core_count": 1 00:38:58.072 } 00:38:58.072 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:58.072 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:58.072 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:58.072 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:58.072 | select(.opcode=="crc32c") 00:38:58.072 | "\(.module_name) \(.executed)"' 
00:38:58.072 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:58.333 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:58.333 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:58.333 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:58.333 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:58.333 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3663161 00:38:58.333 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3663161 ']' 00:38:58.333 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3663161 00:38:58.333 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:38:58.333 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:58.333 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3663161 00:38:58.333 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:58.333 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:58.333 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3663161' 00:38:58.333 killing process with pid 3663161 00:38:58.333 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- 
# kill 3663161 00:38:58.333 Received shutdown signal, test time was about 2.000000 seconds 00:38:58.333 00:38:58.333 Latency(us) 00:38:58.333 [2024-11-28T12:09:28.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:58.333 [2024-11-28T12:09:28.460Z] =================================================================================================================== 00:38:58.333 [2024-11-28T12:09:28.460Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:58.333 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3663161 00:38:58.594 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:38:58.594 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:58.594 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:58.594 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:38:58.594 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:38:58.594 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:38:58.594 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:58.594 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3663959 00:38:58.594 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3663959 /var/tmp/bperf.sock 00:38:58.594 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3663959 ']' 00:38:58.594 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r 
/var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:38:58.594 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:58.594 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:58.594 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:58.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:58.594 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:58.595 13:09:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:58.595 [2024-11-28 13:09:28.588168] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:38:58.595 [2024-11-28 13:09:28.588223] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663959 ] 00:38:58.856 [2024-11-28 13:09:28.720980] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:38:58.856 [2024-11-28 13:09:28.774026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:58.856 [2024-11-28 13:09:28.789651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:59.427 13:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:59.427 13:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:38:59.427 13:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:59.427 13:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:59.427 13:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:59.688 13:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:59.688 13:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:59.951 nvme0n1 00:38:59.951 13:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:59.951 13:09:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:59.951 Running I/O for 2 seconds... 
00:39:02.280 29781.00 IOPS, 116.33 MiB/s [2024-11-28T12:09:32.407Z] 29486.50 IOPS, 115.18 MiB/s 00:39:02.280 Latency(us) 00:39:02.280 [2024-11-28T12:09:32.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:02.280 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:02.280 nvme0n1 : 2.01 29488.99 115.19 0.00 0.00 4333.35 2093.85 12371.48 00:39:02.280 [2024-11-28T12:09:32.407Z] =================================================================================================================== 00:39:02.280 [2024-11-28T12:09:32.407Z] Total : 29488.99 115.19 0.00 0.00 4333.35 2093.85 12371.48 00:39:02.280 { 00:39:02.280 "results": [ 00:39:02.280 { 00:39:02.280 "job": "nvme0n1", 00:39:02.280 "core_mask": "0x2", 00:39:02.280 "workload": "randwrite", 00:39:02.280 "status": "finished", 00:39:02.280 "queue_depth": 128, 00:39:02.280 "io_size": 4096, 00:39:02.280 "runtime": 2.005528, 00:39:02.280 "iops": 29488.992424937474, 00:39:02.280 "mibps": 115.19137665991201, 00:39:02.280 "io_failed": 0, 00:39:02.280 "io_timeout": 0, 00:39:02.280 "avg_latency_us": 4333.345028029731, 00:39:02.280 "min_latency_us": 2093.845639826261, 00:39:02.280 "max_latency_us": 12371.480120280656 00:39:02.280 } 00:39:02.280 ], 00:39:02.280 "core_count": 1 00:39:02.280 } 00:39:02.280 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:39:02.280 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:39:02.280 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:39:02.280 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:39:02.280 | select(.opcode=="crc32c") 00:39:02.280 | "\(.module_name) \(.executed)"' 00:39:02.280 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:39:02.280 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:39:02.280 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:39:02.280 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:39:02.280 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:39:02.280 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3663959 00:39:02.280 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3663959 ']' 00:39:02.280 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3663959 00:39:02.280 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:39:02.280 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:02.280 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3663959 00:39:02.280 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:02.280 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:02.280 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3663959' 00:39:02.280 killing process with pid 3663959 00:39:02.280 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3663959 00:39:02.280 Received shutdown signal, test time was about 2.000000 seconds 
00:39:02.280 00:39:02.280 Latency(us) 00:39:02.280 [2024-11-28T12:09:32.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:02.280 [2024-11-28T12:09:32.407Z] =================================================================================================================== 00:39:02.280 [2024-11-28T12:09:32.407Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:02.280 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3663959 00:39:02.541 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:39:02.541 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:39:02.541 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:39:02.541 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:39:02.541 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:39:02.541 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:39:02.541 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:39:02.541 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3664765 00:39:02.541 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3664765 /var/tmp/bperf.sock 00:39:02.541 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3664765 ']' 00:39:02.541 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:39:02.541 13:09:32 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:02.541 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:02.541 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:02.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:02.541 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:02.541 13:09:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:02.541 [2024-11-28 13:09:32.504671] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:39:02.541 [2024-11-28 13:09:32.504734] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664765 ] 00:39:02.541 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:02.541 Zero copy mechanism will not be used. 00:39:02.541 [2024-11-28 13:09:32.636845] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:39:02.802 [2024-11-28 13:09:32.689471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:02.802 [2024-11-28 13:09:32.705574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:03.373 13:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:03.373 13:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:39:03.373 13:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:39:03.373 13:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:39:03.373 13:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:03.634 13:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:03.634 13:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:03.895 nvme0n1 00:39:03.895 13:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:39:03.895 13:09:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:03.895 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:03.895 Zero copy mechanism will not be used. 00:39:03.895 Running I/O for 2 seconds... 
00:39:05.778 7752.00 IOPS, 969.00 MiB/s [2024-11-28T12:09:35.905Z] 7532.00 IOPS, 941.50 MiB/s 00:39:05.778 Latency(us) 00:39:05.778 [2024-11-28T12:09:35.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:05.778 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:39:05.778 nvme0n1 : 2.00 7530.91 941.36 0.00 0.00 2121.26 1238.52 8046.94 00:39:05.778 [2024-11-28T12:09:35.905Z] =================================================================================================================== 00:39:05.778 [2024-11-28T12:09:35.905Z] Total : 7530.91 941.36 0.00 0.00 2121.26 1238.52 8046.94 00:39:05.778 { 00:39:05.778 "results": [ 00:39:05.778 { 00:39:05.778 "job": "nvme0n1", 00:39:05.778 "core_mask": "0x2", 00:39:05.778 "workload": "randwrite", 00:39:05.778 "status": "finished", 00:39:05.778 "queue_depth": 16, 00:39:05.778 "io_size": 131072, 00:39:05.778 "runtime": 2.002413, 00:39:05.778 "iops": 7530.91395231653, 00:39:05.778 "mibps": 941.3642440395663, 00:39:05.778 "io_failed": 0, 00:39:05.778 "io_timeout": 0, 00:39:05.778 "avg_latency_us": 2121.260185348483, 00:39:05.778 "min_latency_us": 1238.5165385900434, 00:39:05.778 "max_latency_us": 8046.936184430338 00:39:05.778 } 00:39:05.778 ], 00:39:05.778 "core_count": 1 00:39:05.778 } 00:39:06.039 13:09:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:39:06.039 13:09:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:39:06.039 13:09:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:39:06.039 13:09:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:39:06.039 | select(.opcode=="crc32c") 00:39:06.039 | "\(.module_name) \(.executed)"' 00:39:06.039 13:09:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:39:06.039 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:39:06.040 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:39:06.040 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:39:06.040 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:39:06.040 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3664765 00:39:06.040 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3664765 ']' 00:39:06.040 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3664765 00:39:06.040 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:39:06.040 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:06.040 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3664765 00:39:06.300 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:06.300 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:06.300 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3664765' 00:39:06.300 killing process with pid 3664765 00:39:06.300 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3664765 00:39:06.300 Received shutdown signal, test time was about 2.000000 seconds 
00:39:06.300 00:39:06.300 Latency(us) 00:39:06.300 [2024-11-28T12:09:36.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:06.300 [2024-11-28T12:09:36.427Z] =================================================================================================================== 00:39:06.300 [2024-11-28T12:09:36.427Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:06.300 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3664765 00:39:06.300 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3662338 00:39:06.300 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3662338 ']' 00:39:06.300 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3662338 00:39:06.300 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:39:06.301 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:06.301 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3662338 00:39:06.301 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:06.301 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:06.301 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3662338' 00:39:06.301 killing process with pid 3662338 00:39:06.301 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3662338 00:39:06.301 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3662338 00:39:06.562 00:39:06.562 
real 0m16.780s 00:39:06.562 user 0m32.637s 00:39:06.562 sys 0m3.835s 00:39:06.562 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:06.562 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:06.562 ************************************ 00:39:06.562 END TEST nvmf_digest_clean 00:39:06.562 ************************************ 00:39:06.563 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:39:06.563 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:06.563 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:06.563 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:39:06.563 ************************************ 00:39:06.563 START TEST nvmf_digest_error 00:39:06.563 ************************************ 00:39:06.563 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:39:06.563 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:39:06.563 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:06.563 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:06.563 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:06.563 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3665573 00:39:06.563 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3665573 00:39:06.563 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:39:06.563 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3665573 ']' 00:39:06.563 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:06.563 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:06.563 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:06.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:06.563 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:06.563 13:09:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:06.563 [2024-11-28 13:09:36.601329] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:39:06.563 [2024-11-28 13:09:36.601393] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:06.823 [2024-11-28 13:09:36.739888] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:39:06.823 [2024-11-28 13:09:36.791401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:06.823 [2024-11-28 13:09:36.806401] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:39:06.823 [2024-11-28 13:09:36.806425] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:06.823 [2024-11-28 13:09:36.806430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:06.823 [2024-11-28 13:09:36.806435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:06.823 [2024-11-28 13:09:36.806439] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:06.823 [2024-11-28 13:09:36.806894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:07.394 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:07.394 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:39:07.394 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:07.394 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:07.394 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:07.394 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:07.394 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:39:07.394 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:07.394 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:07.394 [2024-11-28 13:09:37.423166] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:39:07.394 13:09:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:07.394 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:39:07.394 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:39:07.394 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:07.394 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:07.394 null0 00:39:07.394 [2024-11-28 13:09:37.496057] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:07.656 [2024-11-28 13:09:37.520208] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:07.656 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:07.656 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:39:07.656 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:39:07.656 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:39:07.656 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:39:07.656 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:39:07.656 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3665690 00:39:07.656 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3665690 /var/tmp/bperf.sock 00:39:07.656 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3665690 ']' 00:39:07.656 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:39:07.656 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:07.656 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:07.656 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:07.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:07.656 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:07.656 13:09:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:07.656 [2024-11-28 13:09:37.576136] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:39:07.656 [2024-11-28 13:09:37.576190] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3665690 ] 00:39:07.656 [2024-11-28 13:09:37.708535] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:39:07.656 [2024-11-28 13:09:37.763393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:07.656 [2024-11-28 13:09:37.779693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:08.600 13:09:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:08.600 13:09:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:39:08.600 13:09:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:08.600 13:09:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:08.600 13:09:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:39:08.600 13:09:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.600 13:09:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:08.600 13:09:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.600 13:09:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:08.600 13:09:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:08.861 nvme0n1 00:39:08.861 13:09:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:39:08.861 13:09:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.861 13:09:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:08.861 13:09:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.861 13:09:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:39:08.861 13:09:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:08.861 Running I/O for 2 seconds... 00:39:08.861 [2024-11-28 13:09:38.980450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:08.861 [2024-11-28 13:09:38.980478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.861 [2024-11-28 13:09:38.980487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.122 [2024-11-28 13:09:38.991270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.122 [2024-11-28 13:09:38.991291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.122 [2024-11-28 13:09:38.991298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.000086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.000104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.000111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.008978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.008997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.009008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.018528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.018547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.018554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.027373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.027390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.027397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.034930] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.034948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.034954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.044749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.044766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.044772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.054876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.054893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.054900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.064185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.064202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.064208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.073533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.073550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.073556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.085212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.085230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.085236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.095498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.095519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.095526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.104902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.104919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.104925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.113865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.113882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.113889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.123670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.123688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.123694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.130946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.130964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.130971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.141383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.141400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.141406] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.151556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.151575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.151581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.160876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.160892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.160898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.172242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.172260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.172266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.184276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.184293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23143 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.184300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.193954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.193972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.193978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.202258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.202275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.202282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.211191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.211208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.211214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.219557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.219575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:2970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.219581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.228774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.228791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.228797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.237949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.237966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.237972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.123 [2024-11-28 13:09:39.246156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.123 [2024-11-28 13:09:39.246177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.123 [2024-11-28 13:09:39.246183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.385 [2024-11-28 13:09:39.255703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.385 [2024-11-28 
13:09:39.255720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.385 [2024-11-28 13:09:39.255730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.385 [2024-11-28 13:09:39.264423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.385 [2024-11-28 13:09:39.264440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.385 [2024-11-28 13:09:39.264446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.385 [2024-11-28 13:09:39.274549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.385 [2024-11-28 13:09:39.274567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.385 [2024-11-28 13:09:39.274573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.385 [2024-11-28 13:09:39.283875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.385 [2024-11-28 13:09:39.283892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.385 [2024-11-28 13:09:39.283898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.385 [2024-11-28 13:09:39.293198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x12b7dc0) 00:39:09.385 [2024-11-28 13:09:39.293215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.385 [2024-11-28 13:09:39.293221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.385 [2024-11-28 13:09:39.302959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.385 [2024-11-28 13:09:39.302976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.385 [2024-11-28 13:09:39.302982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.385 [2024-11-28 13:09:39.313441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.385 [2024-11-28 13:09:39.313458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.385 [2024-11-28 13:09:39.313464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.385 [2024-11-28 13:09:39.321740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.385 [2024-11-28 13:09:39.321757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.385 [2024-11-28 13:09:39.321763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.385 [2024-11-28 13:09:39.331898] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.385 [2024-11-28 13:09:39.331916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.385 [2024-11-28 13:09:39.331923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.385 [2024-11-28 13:09:39.340665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.385 [2024-11-28 13:09:39.340682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.385 [2024-11-28 13:09:39.340688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.385 [2024-11-28 13:09:39.349338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.385 [2024-11-28 13:09:39.349356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.385 [2024-11-28 13:09:39.349362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.385 [2024-11-28 13:09:39.358075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.385 [2024-11-28 13:09:39.358092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.385 [2024-11-28 13:09:39.358099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:39:09.385 [2024-11-28 13:09:39.367616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.385 [2024-11-28 13:09:39.367634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.386 [2024-11-28 13:09:39.367640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.386 [2024-11-28 13:09:39.376238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.386 [2024-11-28 13:09:39.376254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.386 [2024-11-28 13:09:39.376261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.386 [2024-11-28 13:09:39.385375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.386 [2024-11-28 13:09:39.385392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.386 [2024-11-28 13:09:39.385398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.386 [2024-11-28 13:09:39.395037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.386 [2024-11-28 13:09:39.395054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.386 [2024-11-28 13:09:39.395060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.386 [2024-11-28 13:09:39.403041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.386 [2024-11-28 13:09:39.403058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.386 [2024-11-28 13:09:39.403064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.386 [2024-11-28 13:09:39.411232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.386 [2024-11-28 13:09:39.411250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.386 [2024-11-28 13:09:39.411259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.386 [2024-11-28 13:09:39.420840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.386 [2024-11-28 13:09:39.420856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.386 [2024-11-28 13:09:39.420863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.386 [2024-11-28 13:09:39.429771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.386 [2024-11-28 13:09:39.429788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.386 [2024-11-28 
13:09:39.429794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.386 [2024-11-28 13:09:39.439645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.386 [2024-11-28 13:09:39.439663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.386 [2024-11-28 13:09:39.439669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.386 [2024-11-28 13:09:39.449534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.386 [2024-11-28 13:09:39.449552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.386 [2024-11-28 13:09:39.449558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.386 [2024-11-28 13:09:39.457481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.386 [2024-11-28 13:09:39.457499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.386 [2024-11-28 13:09:39.457505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.386 [2024-11-28 13:09:39.468710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.386 [2024-11-28 13:09:39.468727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10700 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.386 [2024-11-28 13:09:39.468734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.386 [2024-11-28 13:09:39.477507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.386 [2024-11-28 13:09:39.477524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.386 [2024-11-28 13:09:39.477531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.386 [2024-11-28 13:09:39.486420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.386 [2024-11-28 13:09:39.486437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.386 [2024-11-28 13:09:39.486444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.386 [2024-11-28 13:09:39.494334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.386 [2024-11-28 13:09:39.494354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.386 [2024-11-28 13:09:39.494360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.386 [2024-11-28 13:09:39.504446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.386 [2024-11-28 13:09:39.504463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.386 [2024-11-28 13:09:39.504469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.647 [2024-11-28 13:09:39.514154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.647 [2024-11-28 13:09:39.514175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.647 [2024-11-28 13:09:39.514182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.647 [2024-11-28 13:09:39.522165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.647 [2024-11-28 13:09:39.522182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.522188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.648 [2024-11-28 13:09:39.531212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.648 [2024-11-28 13:09:39.531229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.531236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.648 [2024-11-28 13:09:39.541827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12b7dc0) 00:39:09.648 [2024-11-28 13:09:39.541844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.541851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.648 [2024-11-28 13:09:39.550471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.648 [2024-11-28 13:09:39.550488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.550495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.648 [2024-11-28 13:09:39.558892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.648 [2024-11-28 13:09:39.558909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.558915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.648 [2024-11-28 13:09:39.567969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.648 [2024-11-28 13:09:39.567986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.567992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.648 [2024-11-28 13:09:39.576802] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.648 [2024-11-28 13:09:39.576819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.576825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.648 [2024-11-28 13:09:39.585192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.648 [2024-11-28 13:09:39.585209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.585215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.648 [2024-11-28 13:09:39.594532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.648 [2024-11-28 13:09:39.594549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.594555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.648 [2024-11-28 13:09:39.603617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.648 [2024-11-28 13:09:39.603634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.603640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:39:09.648 [2024-11-28 13:09:39.612365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.648 [2024-11-28 13:09:39.612382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.612389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.648 [2024-11-28 13:09:39.624165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.648 [2024-11-28 13:09:39.624182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.624188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.648 [2024-11-28 13:09:39.633197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.648 [2024-11-28 13:09:39.633214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.633220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.648 [2024-11-28 13:09:39.644390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.648 [2024-11-28 13:09:39.644407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.644414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.648 [2024-11-28 13:09:39.653724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.648 [2024-11-28 13:09:39.653740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.653750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.648 [2024-11-28 13:09:39.665106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.648 [2024-11-28 13:09:39.665123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.665130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.648 [2024-11-28 13:09:39.677346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.648 [2024-11-28 13:09:39.677363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.677369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.648 [2024-11-28 13:09:39.685320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.648 [2024-11-28 13:09:39.685338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.685345] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.648 [2024-11-28 13:09:39.695373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.648 [2024-11-28 13:09:39.695391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.695397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.648 [2024-11-28 13:09:39.703733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.648 [2024-11-28 13:09:39.703752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.703759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.648 [2024-11-28 13:09:39.712393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.648 [2024-11-28 13:09:39.712410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.712417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.648 [2024-11-28 13:09:39.722631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.648 [2024-11-28 13:09:39.722649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3905 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.722655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.648 [2024-11-28 13:09:39.732387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.648 [2024-11-28 13:09:39.732405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.732411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.648 [2024-11-28 13:09:39.741266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.648 [2024-11-28 13:09:39.741283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.648 [2024-11-28 13:09:39.741290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.649 [2024-11-28 13:09:39.750257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.649 [2024-11-28 13:09:39.750274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.649 [2024-11-28 13:09:39.750280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.649 [2024-11-28 13:09:39.759025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.649 [2024-11-28 13:09:39.759043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:105 nsid:1 lba:20087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.649 [2024-11-28 13:09:39.759049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.649 [2024-11-28 13:09:39.768004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.649 [2024-11-28 13:09:39.768022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.649 [2024-11-28 13:09:39.768028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.910 [2024-11-28 13:09:39.777702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.910 [2024-11-28 13:09:39.777720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.910 [2024-11-28 13:09:39.777726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.910 [2024-11-28 13:09:39.788806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.910 [2024-11-28 13:09:39.788823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.910 [2024-11-28 13:09:39.788829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.910 [2024-11-28 13:09:39.797006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.910 [2024-11-28 
13:09:39.797023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.910 [2024-11-28 13:09:39.797030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.910 [2024-11-28 13:09:39.806090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.910 [2024-11-28 13:09:39.806107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.910 [2024-11-28 13:09:39.806113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.910 [2024-11-28 13:09:39.815063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.910 [2024-11-28 13:09:39.815080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.910 [2024-11-28 13:09:39.815089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.910 [2024-11-28 13:09:39.823934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.910 [2024-11-28 13:09:39.823951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.910 [2024-11-28 13:09:39.823957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.910 [2024-11-28 13:09:39.833558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x12b7dc0) 00:39:09.910 [2024-11-28 13:09:39.833576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.910 [2024-11-28 13:09:39.833582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.910 [2024-11-28 13:09:39.842840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.910 [2024-11-28 13:09:39.842857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:87 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.910 [2024-11-28 13:09:39.842863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.910 [2024-11-28 13:09:39.851353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.911 [2024-11-28 13:09:39.851370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.911 [2024-11-28 13:09:39.851376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.911 [2024-11-28 13:09:39.860356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.911 [2024-11-28 13:09:39.860373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.911 [2024-11-28 13:09:39.860379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.911 [2024-11-28 13:09:39.869242] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.911 [2024-11-28 13:09:39.869259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.911 [2024-11-28 13:09:39.869265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.911 [2024-11-28 13:09:39.878145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.911 [2024-11-28 13:09:39.878167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.911 [2024-11-28 13:09:39.878174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.911 [2024-11-28 13:09:39.886780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.911 [2024-11-28 13:09:39.886797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.911 [2024-11-28 13:09:39.886804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.911 [2024-11-28 13:09:39.895554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.911 [2024-11-28 13:09:39.895573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.911 [2024-11-28 13:09:39.895579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:39:09.911 [2024-11-28 13:09:39.904673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.911 [2024-11-28 13:09:39.904689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.911 [2024-11-28 13:09:39.904695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.911 [2024-11-28 13:09:39.913658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.911 [2024-11-28 13:09:39.913675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.911 [2024-11-28 13:09:39.913681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.911 [2024-11-28 13:09:39.922872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.911 [2024-11-28 13:09:39.922889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.911 [2024-11-28 13:09:39.922895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.911 [2024-11-28 13:09:39.931486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.911 [2024-11-28 13:09:39.931503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.911 [2024-11-28 13:09:39.931510] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.911 [2024-11-28 13:09:39.940416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.911 [2024-11-28 13:09:39.940433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.911 [2024-11-28 13:09:39.940439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.911 [2024-11-28 13:09:39.949606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.911 [2024-11-28 13:09:39.949622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.911 [2024-11-28 13:09:39.949628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.911 [2024-11-28 13:09:39.959302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.911 [2024-11-28 13:09:39.959319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.911 [2024-11-28 13:09:39.959325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.911 27200.00 IOPS, 106.25 MiB/s [2024-11-28T12:09:40.038Z] [2024-11-28 13:09:39.968018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.911 [2024-11-28 13:09:39.968035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11068 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.911 [2024-11-28 13:09:39.968042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.911 [2024-11-28 13:09:39.977860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.911 [2024-11-28 13:09:39.977878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.911 [2024-11-28 13:09:39.977884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.911 [2024-11-28 13:09:39.986505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.911 [2024-11-28 13:09:39.986521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.911 [2024-11-28 13:09:39.986528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.911 [2024-11-28 13:09:39.996172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.911 [2024-11-28 13:09:39.996189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.911 [2024-11-28 13:09:39.996196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.911 [2024-11-28 13:09:40.005875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.911 [2024-11-28 13:09:40.005893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.911 [2024-11-28 13:09:40.005899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.911 [2024-11-28 13:09:40.015483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.911 [2024-11-28 13:09:40.015500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.911 [2024-11-28 13:09:40.015506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.911 [2024-11-28 13:09:40.024273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.911 [2024-11-28 13:09:40.024290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.911 [2024-11-28 13:09:40.024297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.911 [2024-11-28 13:09:40.032150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:09.911 [2024-11-28 13:09:40.032171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.911 [2024-11-28 13:09:40.032178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.172 [2024-11-28 13:09:40.041877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12b7dc0) 00:39:10.172 [2024-11-28 13:09:40.041894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.172 [2024-11-28 13:09:40.041900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.172 [2024-11-28 13:09:40.050734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.172 [2024-11-28 13:09:40.050752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.172 [2024-11-28 13:09:40.050763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.172 [2024-11-28 13:09:40.059320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.172 [2024-11-28 13:09:40.059337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.172 [2024-11-28 13:09:40.059343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.172 [2024-11-28 13:09:40.068849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.172 [2024-11-28 13:09:40.068866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.172 [2024-11-28 13:09:40.068872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.172 [2024-11-28 13:09:40.076844] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.172 [2024-11-28 13:09:40.076863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.173 [2024-11-28 13:09:40.076869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.173 [2024-11-28 13:09:40.087061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.173 [2024-11-28 13:09:40.087078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.173 [2024-11-28 13:09:40.087084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.173 [2024-11-28 13:09:40.097984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.173 [2024-11-28 13:09:40.098001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.173 [2024-11-28 13:09:40.098008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.173 [2024-11-28 13:09:40.108645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.173 [2024-11-28 13:09:40.108662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.173 [2024-11-28 13:09:40.108669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:39:10.173 [2024-11-28 13:09:40.116418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.173 [2024-11-28 13:09:40.116434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.173 [2024-11-28 13:09:40.116441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.173 [2024-11-28 13:09:40.125620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.173 [2024-11-28 13:09:40.125637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.173 [2024-11-28 13:09:40.125643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.173 [2024-11-28 13:09:40.135789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.173 [2024-11-28 13:09:40.135806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.173 [2024-11-28 13:09:40.135812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.173 [2024-11-28 13:09:40.145435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.173 [2024-11-28 13:09:40.145452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.173 [2024-11-28 13:09:40.145458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.173 [2024-11-28 13:09:40.153564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.173 [2024-11-28 13:09:40.153581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.173 [2024-11-28 13:09:40.153587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.173 [2024-11-28 13:09:40.162714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.173 [2024-11-28 13:09:40.162731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.173 [2024-11-28 13:09:40.162737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.173 [2024-11-28 13:09:40.170854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.173 [2024-11-28 13:09:40.170871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.173 [2024-11-28 13:09:40.170877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.173 [2024-11-28 13:09:40.180183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.173 [2024-11-28 13:09:40.180200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.173 [2024-11-28 
13:09:40.180206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.173 [2024-11-28 13:09:40.189226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.173 [2024-11-28 13:09:40.189243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.173 [2024-11-28 13:09:40.189249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.173 [2024-11-28 13:09:40.198350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.173 [2024-11-28 13:09:40.198367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.173 [2024-11-28 13:09:40.198373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.173 [2024-11-28 13:09:40.207604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.173 [2024-11-28 13:09:40.207621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.173 [2024-11-28 13:09:40.207630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.173 [2024-11-28 13:09:40.216077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.173 [2024-11-28 13:09:40.216094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7126 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.173 [2024-11-28 13:09:40.216100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.173 [2024-11-28 13:09:40.225149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.173 [2024-11-28 13:09:40.225170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.173 [2024-11-28 13:09:40.225177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.173 [2024-11-28 13:09:40.234081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.173 [2024-11-28 13:09:40.234097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.173 [2024-11-28 13:09:40.234104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.173 [2024-11-28 13:09:40.242062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.173 [2024-11-28 13:09:40.242079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.173 [2024-11-28 13:09:40.242085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.173 [2024-11-28 13:09:40.252809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.173 [2024-11-28 13:09:40.252825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.173 [2024-11-28 13:09:40.252831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.173 [2024-11-28 13:09:40.262374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.173 [2024-11-28 13:09:40.262391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.173 [2024-11-28 13:09:40.262398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.173 [2024-11-28 13:09:40.270898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.173 [2024-11-28 13:09:40.270914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.174 [2024-11-28 13:09:40.270921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.174 [2024-11-28 13:09:40.280666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.174 [2024-11-28 13:09:40.280683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.174 [2024-11-28 13:09:40.280690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.174 [2024-11-28 13:09:40.292705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12b7dc0) 00:39:10.174 [2024-11-28 13:09:40.292725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.174 [2024-11-28 13:09:40.292731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.436 [2024-11-28 13:09:40.301313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.436 [2024-11-28 13:09:40.301330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.436 [2024-11-28 13:09:40.301336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.436 [2024-11-28 13:09:40.310343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.436 [2024-11-28 13:09:40.310360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.436 [2024-11-28 13:09:40.310367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.436 [2024-11-28 13:09:40.319227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.436 [2024-11-28 13:09:40.319244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.436 [2024-11-28 13:09:40.319251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.436 [2024-11-28 13:09:40.328845] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.436 [2024-11-28 13:09:40.328861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.436 [2024-11-28 13:09:40.328868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.436 [2024-11-28 13:09:40.337406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.436 [2024-11-28 13:09:40.337422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.436 [2024-11-28 13:09:40.337429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.436 [2024-11-28 13:09:40.347868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.436 [2024-11-28 13:09:40.347885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.436 [2024-11-28 13:09:40.347891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.436 [2024-11-28 13:09:40.358024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.436 [2024-11-28 13:09:40.358040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.436 [2024-11-28 13:09:40.358047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:39:10.436 [2024-11-28 13:09:40.366514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.436 [2024-11-28 13:09:40.366530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.436 [2024-11-28 13:09:40.366536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.436 [2024-11-28 13:09:40.375818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.436 [2024-11-28 13:09:40.375835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.436 [2024-11-28 13:09:40.375841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.436 [2024-11-28 13:09:40.383700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.436 [2024-11-28 13:09:40.383716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.436 [2024-11-28 13:09:40.383722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.436 [2024-11-28 13:09:40.393822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.436 [2024-11-28 13:09:40.393839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.436 [2024-11-28 13:09:40.393845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.436 [2024-11-28 13:09:40.402384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.436 [2024-11-28 13:09:40.402401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.436 [2024-11-28 13:09:40.402407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.437 [2024-11-28 13:09:40.411961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.437 [2024-11-28 13:09:40.411978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.437 [2024-11-28 13:09:40.411985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.437 [2024-11-28 13:09:40.421379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.437 [2024-11-28 13:09:40.421396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.437 [2024-11-28 13:09:40.421403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.437 [2024-11-28 13:09:40.430298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.437 [2024-11-28 13:09:40.430315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.437 [2024-11-28 
13:09:40.430321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.437 [2024-11-28 13:09:40.439802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.437 [2024-11-28 13:09:40.439818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.437 [2024-11-28 13:09:40.439824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.437 [2024-11-28 13:09:40.448428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.437 [2024-11-28 13:09:40.448444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.437 [2024-11-28 13:09:40.448454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.437 [2024-11-28 13:09:40.456787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.437 [2024-11-28 13:09:40.456803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.437 [2024-11-28 13:09:40.456810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.437 [2024-11-28 13:09:40.465792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.437 [2024-11-28 13:09:40.465809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22167 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.437 [2024-11-28 13:09:40.465816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.437 [2024-11-28 13:09:40.477598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.437 [2024-11-28 13:09:40.477615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.437 [2024-11-28 13:09:40.477621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.437 [2024-11-28 13:09:40.486489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.437 [2024-11-28 13:09:40.486505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.437 [2024-11-28 13:09:40.486511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.437 [2024-11-28 13:09:40.495161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.437 [2024-11-28 13:09:40.495178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.437 [2024-11-28 13:09:40.495184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.437 [2024-11-28 13:09:40.503697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.437 [2024-11-28 13:09:40.503714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.437 [2024-11-28 13:09:40.503720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.437 [2024-11-28 13:09:40.514738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.437 [2024-11-28 13:09:40.514755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.437 [2024-11-28 13:09:40.514761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.437 [2024-11-28 13:09:40.526364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.437 [2024-11-28 13:09:40.526381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.437 [2024-11-28 13:09:40.526387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.437 [2024-11-28 13:09:40.535039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.437 [2024-11-28 13:09:40.535056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.437 [2024-11-28 13:09:40.535063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.437 [2024-11-28 13:09:40.543232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12b7dc0) 00:39:10.437 [2024-11-28 13:09:40.543250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.437 [2024-11-28 13:09:40.543256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.437 [2024-11-28 13:09:40.552342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.437 [2024-11-28 13:09:40.552358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.437 [2024-11-28 13:09:40.552365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.699 [2024-11-28 13:09:40.561338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.699 [2024-11-28 13:09:40.561355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.699 [2024-11-28 13:09:40.561361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.699 [2024-11-28 13:09:40.570881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.699 [2024-11-28 13:09:40.570898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.699 [2024-11-28 13:09:40.570904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.699 [2024-11-28 13:09:40.579808] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.699 [2024-11-28 13:09:40.579825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.699 [2024-11-28 13:09:40.579832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.699 [2024-11-28 13:09:40.588688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.699 [2024-11-28 13:09:40.588704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.699 [2024-11-28 13:09:40.588711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.699 [2024-11-28 13:09:40.597603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.699 [2024-11-28 13:09:40.597620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.699 [2024-11-28 13:09:40.597626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.699 [2024-11-28 13:09:40.607817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.699 [2024-11-28 13:09:40.607834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.699 [2024-11-28 13:09:40.607844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:39:10.699 [2024-11-28 13:09:40.616374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.699 [2024-11-28 13:09:40.616390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.699 [2024-11-28 13:09:40.616397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.699 [2024-11-28 13:09:40.624903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.699 [2024-11-28 13:09:40.624920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.699 [2024-11-28 13:09:40.624926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.699 [2024-11-28 13:09:40.633950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.699 [2024-11-28 13:09:40.633966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.699 [2024-11-28 13:09:40.633972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.699 [2024-11-28 13:09:40.643270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.699 [2024-11-28 13:09:40.643287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.699 [2024-11-28 13:09:40.643293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.699 [2024-11-28 13:09:40.652290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.699 [2024-11-28 13:09:40.652306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.699 [2024-11-28 13:09:40.652313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.699 [2024-11-28 13:09:40.660982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.699 [2024-11-28 13:09:40.660999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.699 [2024-11-28 13:09:40.661005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.699 [2024-11-28 13:09:40.670509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.699 [2024-11-28 13:09:40.670526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.699 [2024-11-28 13:09:40.670532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.699 [2024-11-28 13:09:40.679368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.699 [2024-11-28 13:09:40.679386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.700 [2024-11-28 13:09:40.679392] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.700 [2024-11-28 13:09:40.687547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.700 [2024-11-28 13:09:40.687567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.700 [2024-11-28 13:09:40.687573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.700 [2024-11-28 13:09:40.696987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.700 [2024-11-28 13:09:40.697003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.700 [2024-11-28 13:09:40.697009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.700 [2024-11-28 13:09:40.705760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.700 [2024-11-28 13:09:40.705777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.700 [2024-11-28 13:09:40.705784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.700 [2024-11-28 13:09:40.715188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.700 [2024-11-28 13:09:40.715205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21219 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:39:10.700 [2024-11-28 13:09:40.715213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.700 [2024-11-28 13:09:40.723563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.700 [2024-11-28 13:09:40.723580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.700 [2024-11-28 13:09:40.723587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.700 [2024-11-28 13:09:40.733850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.700 [2024-11-28 13:09:40.733868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.700 [2024-11-28 13:09:40.733874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.700 [2024-11-28 13:09:40.741865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.700 [2024-11-28 13:09:40.741885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.700 [2024-11-28 13:09:40.741891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.700 [2024-11-28 13:09:40.751165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.700 [2024-11-28 13:09:40.751181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:58 nsid:1 lba:10080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.700 [2024-11-28 13:09:40.751188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.700 [2024-11-28 13:09:40.760122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.700 [2024-11-28 13:09:40.760139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.700 [2024-11-28 13:09:40.760145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.700 [2024-11-28 13:09:40.768180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.700 [2024-11-28 13:09:40.768197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.700 [2024-11-28 13:09:40.768203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.700 [2024-11-28 13:09:40.777512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.700 [2024-11-28 13:09:40.777529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.700 [2024-11-28 13:09:40.777536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.700 [2024-11-28 13:09:40.788633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.700 [2024-11-28 
13:09:40.788650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.700 [2024-11-28 13:09:40.788657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.700 [2024-11-28 13:09:40.798724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.700 [2024-11-28 13:09:40.798740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.700 [2024-11-28 13:09:40.798746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.700 [2024-11-28 13:09:40.807332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.700 [2024-11-28 13:09:40.807349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.700 [2024-11-28 13:09:40.807355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.700 [2024-11-28 13:09:40.816359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.700 [2024-11-28 13:09:40.816375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.700 [2024-11-28 13:09:40.816381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.962 [2024-11-28 13:09:40.825811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x12b7dc0) 00:39:10.962 [2024-11-28 13:09:40.825828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.962 [2024-11-28 13:09:40.825835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.962 [2024-11-28 13:09:40.835249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.962 [2024-11-28 13:09:40.835266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.962 [2024-11-28 13:09:40.835272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.962 [2024-11-28 13:09:40.843497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.962 [2024-11-28 13:09:40.843513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.962 [2024-11-28 13:09:40.843523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.962 [2024-11-28 13:09:40.853970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.962 [2024-11-28 13:09:40.853986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.962 [2024-11-28 13:09:40.853993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.962 [2024-11-28 13:09:40.862303] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.962 [2024-11-28 13:09:40.862319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.962 [2024-11-28 13:09:40.862326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.962 [2024-11-28 13:09:40.870761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.962 [2024-11-28 13:09:40.870777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.962 [2024-11-28 13:09:40.870784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.962 [2024-11-28 13:09:40.879953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.962 [2024-11-28 13:09:40.879969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.962 [2024-11-28 13:09:40.879975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.962 [2024-11-28 13:09:40.889227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.962 [2024-11-28 13:09:40.889244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:66 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.962 [2024-11-28 13:09:40.889250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:39:10.962 [2024-11-28 13:09:40.898252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.962 [2024-11-28 13:09:40.898269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.962 [2024-11-28 13:09:40.898276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.962 [2024-11-28 13:09:40.908179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.962 [2024-11-28 13:09:40.908197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.962 [2024-11-28 13:09:40.908204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.962 [2024-11-28 13:09:40.917270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.962 [2024-11-28 13:09:40.917287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.962 [2024-11-28 13:09:40.917293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.962 [2024-11-28 13:09:40.926372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.962 [2024-11-28 13:09:40.926389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.962 [2024-11-28 13:09:40.926395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.962 [2024-11-28 13:09:40.934985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.962 [2024-11-28 13:09:40.935002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.962 [2024-11-28 13:09:40.935009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.962 [2024-11-28 13:09:40.944680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.962 [2024-11-28 13:09:40.944697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.962 [2024-11-28 13:09:40.944703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.962 [2024-11-28 13:09:40.953529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.962 [2024-11-28 13:09:40.953545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.962 [2024-11-28 13:09:40.953552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.962 [2024-11-28 13:09:40.962283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b7dc0) 00:39:10.962 [2024-11-28 13:09:40.962300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:10.962 [2024-11-28 
13:09:40.962306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:10.962 27437.00 IOPS, 107.18 MiB/s 00:39:10.962 Latency(us) 00:39:10.962 [2024-11-28T12:09:41.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:10.962 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:10.962 nvme0n1 : 2.04 26917.40 105.15 0.00 0.00 4657.71 2230.70 43573.89 00:39:10.962 [2024-11-28T12:09:41.089Z] =================================================================================================================== 00:39:10.962 [2024-11-28T12:09:41.089Z] Total : 26917.40 105.15 0.00 0.00 4657.71 2230.70 43573.89 00:39:10.962 { 00:39:10.962 "results": [ 00:39:10.962 { 00:39:10.962 "job": "nvme0n1", 00:39:10.962 "core_mask": "0x2", 00:39:10.962 "workload": "randread", 00:39:10.962 "status": "finished", 00:39:10.962 "queue_depth": 128, 00:39:10.962 "io_size": 4096, 00:39:10.962 "runtime": 2.043362, 00:39:10.962 "iops": 26917.403768886765, 00:39:10.962 "mibps": 105.14610847221392, 00:39:10.962 "io_failed": 0, 00:39:10.962 "io_timeout": 0, 00:39:10.962 "avg_latency_us": 4657.708943135598, 00:39:10.962 "min_latency_us": 2230.698296024056, 00:39:10.962 "max_latency_us": 43573.88573337788 00:39:10.962 } 00:39:10.962 ], 00:39:10.962 "core_count": 1 00:39:10.962 } 00:39:10.962 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:39:10.963 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:39:10.963 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:39:10.963 | .driver_specific 00:39:10.963 | .nvme_error 00:39:10.963 | .status_code 00:39:10.963 | .command_transient_transport_error' 00:39:10.963 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:39:11.223 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 215 > 0 )) 00:39:11.223 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3665690 00:39:11.223 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3665690 ']' 00:39:11.223 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3665690 00:39:11.223 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:39:11.223 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:11.223 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3665690 00:39:11.223 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:11.223 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:11.223 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3665690' 00:39:11.223 killing process with pid 3665690 00:39:11.223 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3665690 00:39:11.223 Received shutdown signal, test time was about 2.000000 seconds 00:39:11.223 00:39:11.223 Latency(us) 00:39:11.223 [2024-11-28T12:09:41.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:11.223 [2024-11-28T12:09:41.350Z] =================================================================================================================== 00:39:11.223 
[2024-11-28T12:09:41.350Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:11.223 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3665690 00:39:11.484 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:39:11.484 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:39:11.484 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:39:11.484 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:39:11.484 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:39:11.484 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3666478 00:39:11.484 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3666478 /var/tmp/bperf.sock 00:39:11.484 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3666478 ']' 00:39:11.484 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:39:11.484 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:11.484 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:11.484 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:11.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:39:11.484 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:11.484 13:09:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:11.484 [2024-11-28 13:09:41.427090] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:39:11.484 [2024-11-28 13:09:41.427148] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3666478 ] 00:39:11.484 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:11.484 Zero copy mechanism will not be used. 00:39:11.484 [2024-11-28 13:09:41.559550] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:39:11.744 [2024-11-28 13:09:41.614084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:11.745 [2024-11-28 13:09:41.630094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:12.315 13:09:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:12.315 13:09:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:39:12.315 13:09:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:12.315 13:09:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:12.315 13:09:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 
00:39:12.315 13:09:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.315 13:09:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:12.315 13:09:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.315 13:09:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:12.315 13:09:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:12.575 nvme0n1 00:39:12.575 13:09:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:39:12.575 13:09:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.575 13:09:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:12.575 13:09:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.575 13:09:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:39:12.575 13:09:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:12.837 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:12.837 Zero copy mechanism will not be used. 00:39:12.837 Running I/O for 2 seconds... 
00:39:12.837 [2024-11-28 13:09:42.748770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:12.837 [2024-11-28 13:09:42.748804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.837 [2024-11-28 13:09:42.748813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:12.837 [2024-11-28 13:09:42.759843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:12.837 [2024-11-28 13:09:42.759866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.837 [2024-11-28 13:09:42.759874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:12.837 [2024-11-28 13:09:42.771527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:12.837 [2024-11-28 13:09:42.771552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.837 [2024-11-28 13:09:42.771559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:12.837 [2024-11-28 13:09:42.782770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:12.837 [2024-11-28 13:09:42.782790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.837 [2024-11-28 13:09:42.782797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:12.837 [2024-11-28 13:09:42.795226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:12.837 [2024-11-28 13:09:42.795245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.837 [2024-11-28 13:09:42.795252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:12.837 [2024-11-28 13:09:42.806331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:12.837 [2024-11-28 13:09:42.806354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.837 [2024-11-28 13:09:42.806362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:12.837 [2024-11-28 13:09:42.817138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:12.837 [2024-11-28 13:09:42.817163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.837 [2024-11-28 13:09:42.817170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:12.837 [2024-11-28 13:09:42.827303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:12.837 [2024-11-28 13:09:42.827321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.837 [2024-11-28 13:09:42.827328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:12.837 [2024-11-28 13:09:42.836601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:12.837 [2024-11-28 13:09:42.836619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.837 [2024-11-28 13:09:42.836625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:12.837 [2024-11-28 13:09:42.842467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:12.837 [2024-11-28 13:09:42.842484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.837 [2024-11-28 13:09:42.842491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:12.837 [2024-11-28 13:09:42.853408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:12.837 [2024-11-28 13:09:42.853427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.837 [2024-11-28 13:09:42.853434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:12.837 [2024-11-28 13:09:42.862960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:12.837 [2024-11-28 13:09:42.862979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:39:12.837 [2024-11-28 13:09:42.862985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:12.837 [2024-11-28 13:09:42.870251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:12.837 [2024-11-28 13:09:42.870269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.837 [2024-11-28 13:09:42.870275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:12.837 [2024-11-28 13:09:42.881036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:12.837 [2024-11-28 13:09:42.881054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.837 [2024-11-28 13:09:42.881061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:12.837 [2024-11-28 13:09:42.891883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:12.837 [2024-11-28 13:09:42.891902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.837 [2024-11-28 13:09:42.891908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:12.837 [2024-11-28 13:09:42.902879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:12.837 [2024-11-28 13:09:42.902898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.837 [2024-11-28 13:09:42.902904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:12.837 [2024-11-28 13:09:42.912593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:12.837 [2024-11-28 13:09:42.912611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.837 [2024-11-28 13:09:42.912618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:12.838 [2024-11-28 13:09:42.921567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:12.838 [2024-11-28 13:09:42.921586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.838 [2024-11-28 13:09:42.921592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:12.838 [2024-11-28 13:09:42.932402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:12.838 [2024-11-28 13:09:42.932421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.838 [2024-11-28 13:09:42.932427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:12.838 [2024-11-28 13:09:42.938993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:12.838 [2024-11-28 13:09:42.939011] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.838 [2024-11-28 13:09:42.939021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:12.838 [2024-11-28 13:09:42.946570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:12.838 [2024-11-28 13:09:42.946589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.838 [2024-11-28 13:09:42.946595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:12.838 [2024-11-28 13:09:42.956938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:12.838 [2024-11-28 13:09:42.956956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.838 [2024-11-28 13:09:42.956962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.100 [2024-11-28 13:09:42.962776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.100 [2024-11-28 13:09:42.962794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.100 [2024-11-28 13:09:42.962801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.100 [2024-11-28 13:09:42.973672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x196c2d0) 00:39:13.100 [2024-11-28 13:09:42.973690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.100 [2024-11-28 13:09:42.973697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.100 [2024-11-28 13:09:42.985403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.100 [2024-11-28 13:09:42.985422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.100 [2024-11-28 13:09:42.985428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.100 [2024-11-28 13:09:42.996682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.100 [2024-11-28 13:09:42.996700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.100 [2024-11-28 13:09:42.996707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.100 [2024-11-28 13:09:43.008525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.100 [2024-11-28 13:09:43.008543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.100 [2024-11-28 13:09:43.008550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.100 [2024-11-28 13:09:43.019202] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.100 [2024-11-28 13:09:43.019220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.100 [2024-11-28 13:09:43.019226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.100 [2024-11-28 13:09:43.030923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.100 [2024-11-28 13:09:43.030942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.100 [2024-11-28 13:09:43.030949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.100 [2024-11-28 13:09:43.041944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.100 [2024-11-28 13:09:43.041963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.101 [2024-11-28 13:09:43.041969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.101 [2024-11-28 13:09:43.053561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.101 [2024-11-28 13:09:43.053580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.101 [2024-11-28 13:09:43.053586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:39:13.101 [2024-11-28 13:09:43.059310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.101 [2024-11-28 13:09:43.059327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.101 [2024-11-28 13:09:43.059333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.101 [2024-11-28 13:09:43.065385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.101 [2024-11-28 13:09:43.065404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.101 [2024-11-28 13:09:43.065410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.101 [2024-11-28 13:09:43.075864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.101 [2024-11-28 13:09:43.075882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.101 [2024-11-28 13:09:43.075889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.101 [2024-11-28 13:09:43.081062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.101 [2024-11-28 13:09:43.081080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.101 [2024-11-28 13:09:43.081087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.101 [2024-11-28 13:09:43.088323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.101 [2024-11-28 13:09:43.088342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.101 [2024-11-28 13:09:43.088348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.101 [2024-11-28 13:09:43.095151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.101 [2024-11-28 13:09:43.095174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.101 [2024-11-28 13:09:43.095187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.101 [2024-11-28 13:09:43.103480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.101 [2024-11-28 13:09:43.103499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.101 [2024-11-28 13:09:43.103507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.101 [2024-11-28 13:09:43.112772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.101 [2024-11-28 13:09:43.112791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.101 [2024-11-28 
13:09:43.112797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.101 [2024-11-28 13:09:43.120222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.101 [2024-11-28 13:09:43.120240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.101 [2024-11-28 13:09:43.120246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.101 [2024-11-28 13:09:43.130448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.101 [2024-11-28 13:09:43.130466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.101 [2024-11-28 13:09:43.130472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.101 [2024-11-28 13:09:43.137895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.101 [2024-11-28 13:09:43.137914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.101 [2024-11-28 13:09:43.137920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.101 [2024-11-28 13:09:43.142596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.101 [2024-11-28 13:09:43.142614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4224 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.101 [2024-11-28 13:09:43.142620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.101 [2024-11-28 13:09:43.150244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.101 [2024-11-28 13:09:43.150263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.101 [2024-11-28 13:09:43.150269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.101 [2024-11-28 13:09:43.162009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.101 [2024-11-28 13:09:43.162028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.101 [2024-11-28 13:09:43.162034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.101 [2024-11-28 13:09:43.173110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.101 [2024-11-28 13:09:43.173132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.101 [2024-11-28 13:09:43.173138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.101 [2024-11-28 13:09:43.181938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.101 [2024-11-28 13:09:43.181957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.101 [2024-11-28 13:09:43.181963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.101 [2024-11-28 13:09:43.188021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.101 [2024-11-28 13:09:43.188039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.101 [2024-11-28 13:09:43.188046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.101 [2024-11-28 13:09:43.196487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.101 [2024-11-28 13:09:43.196506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.101 [2024-11-28 13:09:43.196512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.101 [2024-11-28 13:09:43.204006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.101 [2024-11-28 13:09:43.204024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.101 [2024-11-28 13:09:43.204031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.101 [2024-11-28 13:09:43.209111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x196c2d0) 00:39:13.102 [2024-11-28 13:09:43.209129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.102 [2024-11-28 13:09:43.209136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.102 [2024-11-28 13:09:43.214494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.102 [2024-11-28 13:09:43.214512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.102 [2024-11-28 13:09:43.214518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.102 [2024-11-28 13:09:43.222355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.102 [2024-11-28 13:09:43.222373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.102 [2024-11-28 13:09:43.222380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.228392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.228411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.228418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.235201] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.235220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.235226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.245510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.245529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.245535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.257360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.257378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.257385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.264900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.264918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.264924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.269911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.269930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.269936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.274177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.274196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.274203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.282696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.282715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.282721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.287784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.287802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.287808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.292559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.292577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.292587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.297556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.297574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.297580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.302445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.302464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.302470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.312676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.312695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 
13:09:43.312702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.319647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.319665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.319672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.324118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.324137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.324143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.330414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.330433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.330439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.338991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.339010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25440 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.339016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.347064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.347082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.347088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.352452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.352470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.352476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.361815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.361833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.361839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.370042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.370061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.370067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.376860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.376879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.376885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.385267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.385285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.385291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.394912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.394933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.394941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.402242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 
00:39:13.363 [2024-11-28 13:09:43.402261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.402267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.409602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.409620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.409626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.416545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.416562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.416572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.419407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.419424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.419430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.428298] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.428316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.428322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.436782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.436800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.436806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.445873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.445891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.445898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.450192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.450209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.450216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.457349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.457371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.457378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.463623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.463641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.463648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.471991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.472010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.472016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.363 [2024-11-28 13:09:43.479603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.363 [2024-11-28 13:09:43.479624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.363 [2024-11-28 13:09:43.479630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.624 [2024-11-28 13:09:43.488327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.624 [2024-11-28 13:09:43.488345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.624 [2024-11-28 13:09:43.488351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.624 [2024-11-28 13:09:43.494755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.624 [2024-11-28 13:09:43.494774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.624 [2024-11-28 13:09:43.494781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.624 [2024-11-28 13:09:43.499240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.624 [2024-11-28 13:09:43.499258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.624 [2024-11-28 13:09:43.499264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.624 [2024-11-28 13:09:43.505560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.624 [2024-11-28 13:09:43.505579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.624 [2024-11-28 13:09:43.505585] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.624 [2024-11-28 13:09:43.510143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.624 [2024-11-28 13:09:43.510166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.624 [2024-11-28 13:09:43.510172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.624 [2024-11-28 13:09:43.517858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.624 [2024-11-28 13:09:43.517876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.624 [2024-11-28 13:09:43.517882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.624 [2024-11-28 13:09:43.522235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.624 [2024-11-28 13:09:43.522253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.624 [2024-11-28 13:09:43.522259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.624 [2024-11-28 13:09:43.533501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.624 [2024-11-28 13:09:43.533520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:39:13.624 [2024-11-28 13:09:43.533526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.624 [2024-11-28 13:09:43.544779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.624 [2024-11-28 13:09:43.544797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.625 [2024-11-28 13:09:43.544803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.625 [2024-11-28 13:09:43.554974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.625 [2024-11-28 13:09:43.554993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.625 [2024-11-28 13:09:43.554999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.625 [2024-11-28 13:09:43.563207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.625 [2024-11-28 13:09:43.563226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.625 [2024-11-28 13:09:43.563232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.625 [2024-11-28 13:09:43.572489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.625 [2024-11-28 13:09:43.572508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.625 [2024-11-28 13:09:43.572515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.625 [2024-11-28 13:09:43.583507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.625 [2024-11-28 13:09:43.583525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.625 [2024-11-28 13:09:43.583531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.625 [2024-11-28 13:09:43.590850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.625 [2024-11-28 13:09:43.590868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.625 [2024-11-28 13:09:43.590875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.625 [2024-11-28 13:09:43.600372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.625 [2024-11-28 13:09:43.600390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.625 [2024-11-28 13:09:43.600397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.625 [2024-11-28 13:09:43.611526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.625 [2024-11-28 
13:09:43.611544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.625 [2024-11-28 13:09:43.611550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.625 [2024-11-28 13:09:43.624495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.625 [2024-11-28 13:09:43.624514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.625 [2024-11-28 13:09:43.624525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.625 [2024-11-28 13:09:43.635830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.625 [2024-11-28 13:09:43.635847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.625 [2024-11-28 13:09:43.635854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.625 [2024-11-28 13:09:43.647614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.625 [2024-11-28 13:09:43.647632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.625 [2024-11-28 13:09:43.647639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.625 [2024-11-28 13:09:43.658407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x196c2d0) 00:39:13.625 [2024-11-28 13:09:43.658425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.625 [2024-11-28 13:09:43.658432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.625 [2024-11-28 13:09:43.669743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.625 [2024-11-28 13:09:43.669762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.625 [2024-11-28 13:09:43.669768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.625 [2024-11-28 13:09:43.681867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.625 [2024-11-28 13:09:43.681885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.625 [2024-11-28 13:09:43.681892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.625 [2024-11-28 13:09:43.693351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.625 [2024-11-28 13:09:43.693369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.625 [2024-11-28 13:09:43.693376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.625 [2024-11-28 13:09:43.705575] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.625 [2024-11-28 13:09:43.705593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.625 [2024-11-28 13:09:43.705599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.625 [2024-11-28 13:09:43.715622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.625 [2024-11-28 13:09:43.715640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.625 [2024-11-28 13:09:43.715646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.625 [2024-11-28 13:09:43.721726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.625 [2024-11-28 13:09:43.721744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.625 [2024-11-28 13:09:43.721750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.625 [2024-11-28 13:09:43.726344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.625 [2024-11-28 13:09:43.726361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.625 [2024-11-28 13:09:43.726368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:39:13.625 [2024-11-28 13:09:43.729430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.625 [2024-11-28 13:09:43.729448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.625 [2024-11-28 13:09:43.729455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.625 3614.00 IOPS, 451.75 MiB/s [2024-11-28T12:09:43.752Z] [2024-11-28 13:09:43.739017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.625 [2024-11-28 13:09:43.739035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.625 [2024-11-28 13:09:43.739042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.625 [2024-11-28 13:09:43.743459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.625 [2024-11-28 13:09:43.743477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.625 [2024-11-28 13:09:43.743483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.887 [2024-11-28 13:09:43.751551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.887 [2024-11-28 13:09:43.751570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.887 [2024-11-28 13:09:43.751576] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.887 [2024-11-28 13:09:43.759779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.887 [2024-11-28 13:09:43.759798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.887 [2024-11-28 13:09:43.759804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.887 [2024-11-28 13:09:43.765676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.887 [2024-11-28 13:09:43.765694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.887 [2024-11-28 13:09:43.765700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.887 [2024-11-28 13:09:43.770431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.887 [2024-11-28 13:09:43.770450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.887 [2024-11-28 13:09:43.770459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.887 [2024-11-28 13:09:43.780800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.887 [2024-11-28 13:09:43.780819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:13.887 [2024-11-28 13:09:43.780825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.887 [2024-11-28 13:09:43.789208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.887 [2024-11-28 13:09:43.789225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.887 [2024-11-28 13:09:43.789232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.887 [2024-11-28 13:09:43.794341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.887 [2024-11-28 13:09:43.794359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.887 [2024-11-28 13:09:43.794365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.887 [2024-11-28 13:09:43.798908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.887 [2024-11-28 13:09:43.798925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.887 [2024-11-28 13:09:43.798931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.887 [2024-11-28 13:09:43.803362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.887 [2024-11-28 13:09:43.803380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.887 [2024-11-28 13:09:43.803387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.887 [2024-11-28 13:09:43.807949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.887 [2024-11-28 13:09:43.807967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.887 [2024-11-28 13:09:43.807973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.887 [2024-11-28 13:09:43.812391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.887 [2024-11-28 13:09:43.812410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.887 [2024-11-28 13:09:43.812416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.887 [2024-11-28 13:09:43.818879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.887 [2024-11-28 13:09:43.818898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.887 [2024-11-28 13:09:43.818904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.887 [2024-11-28 13:09:43.823157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.887 [2024-11-28 13:09:43.823183] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.887 [2024-11-28 13:09:43.823189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.887 [2024-11-28 13:09:43.834034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.887 [2024-11-28 13:09:43.834052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.887 [2024-11-28 13:09:43.834059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.887 [2024-11-28 13:09:43.843359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.887 [2024-11-28 13:09:43.843377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.887 [2024-11-28 13:09:43.843383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.887 [2024-11-28 13:09:43.854576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.887 [2024-11-28 13:09:43.854595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.887 [2024-11-28 13:09:43.854601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.887 [2024-11-28 13:09:43.865414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x196c2d0) 00:39:13.887 [2024-11-28 13:09:43.865433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.887 [2024-11-28 13:09:43.865439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.887 [2024-11-28 13:09:43.877193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.887 [2024-11-28 13:09:43.877211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.887 [2024-11-28 13:09:43.877217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.887 [2024-11-28 13:09:43.883032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.887 [2024-11-28 13:09:43.883050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.887 [2024-11-28 13:09:43.883057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.887 [2024-11-28 13:09:43.889980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.887 [2024-11-28 13:09:43.889999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.887 [2024-11-28 13:09:43.890005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.887 [2024-11-28 13:09:43.901275] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.887 [2024-11-28 13:09:43.901293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.887 [2024-11-28 13:09:43.901299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.888 [2024-11-28 13:09:43.910838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.888 [2024-11-28 13:09:43.910857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.888 [2024-11-28 13:09:43.910863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.888 [2024-11-28 13:09:43.919462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.888 [2024-11-28 13:09:43.919481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.888 [2024-11-28 13:09:43.919487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.888 [2024-11-28 13:09:43.929679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.888 [2024-11-28 13:09:43.929697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.888 [2024-11-28 13:09:43.929703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:39:13.888 [2024-11-28 13:09:43.940223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.888 [2024-11-28 13:09:43.940241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.888 [2024-11-28 13:09:43.940247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.888 [2024-11-28 13:09:43.950467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.888 [2024-11-28 13:09:43.950486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.888 [2024-11-28 13:09:43.950492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.888 [2024-11-28 13:09:43.962227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.888 [2024-11-28 13:09:43.962245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.888 [2024-11-28 13:09:43.962251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.888 [2024-11-28 13:09:43.972658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.888 [2024-11-28 13:09:43.972676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.888 [2024-11-28 13:09:43.972682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:13.888 [2024-11-28 13:09:43.982306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.888 [2024-11-28 13:09:43.982324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.888 [2024-11-28 13:09:43.982331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:13.888 [2024-11-28 13:09:43.991492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.888 [2024-11-28 13:09:43.991510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.888 [2024-11-28 13:09:43.991520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:13.888 [2024-11-28 13:09:43.998254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.888 [2024-11-28 13:09:43.998271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.888 [2024-11-28 13:09:43.998278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:13.888 [2024-11-28 13:09:44.006520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:13.888 [2024-11-28 13:09:44.006538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.888 [2024-11-28 
13:09:44.006545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:14.149 [2024-11-28 13:09:44.015533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:14.149 [2024-11-28 13:09:44.015552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.149 [2024-11-28 13:09:44.015558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:14.149 [2024-11-28 13:09:44.023004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:14.149 [2024-11-28 13:09:44.023023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.149 [2024-11-28 13:09:44.023029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:14.149 [2024-11-28 13:09:44.028821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:14.149 [2024-11-28 13:09:44.028839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:14.149 [2024-11-28 13:09:44.028846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:14.149 [2024-11-28 13:09:44.037073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0) 00:39:14.149 [2024-11-28 13:09:44.037091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25024 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:14.149 [2024-11-28 13:09:44.037097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:39:14.149 [2024-11-28 13:09:44.044325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0)
00:39:14.149 [2024-11-28 13:09:44.044343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:14.149 [2024-11-28 13:09:44.044350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair=(0x196c2d0), READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for roughly 70 further READ completions on qid:1 with varying cid and lba, from 13:09:44.053 through 13:09:44.725 ...]
00:39:14.675 3485.00 IOPS, 435.62 MiB/s [2024-11-28T12:09:44.802Z]
00:39:14.675 [2024-11-28 13:09:44.736814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196c2d0)
00:39:14.675 [2024-11-28 13:09:44.736831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:14.675 [2024-11-28 13:09:44.736837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:39:14.675
00:39:14.675 Latency(us)
00:39:14.675 [2024-11-28T12:09:44.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:14.675 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:39:14.675 nvme0n1 : 2.00 3488.15 436.02 0.00 0.00 4583.02 533.73 19268.85
00:39:14.675 [2024-11-28T12:09:44.802Z] ===================================================================================================================
00:39:14.675 [2024-11-28T12:09:44.802Z] Total : 3488.15 436.02 0.00 0.00 4583.02 533.73 19268.85
00:39:14.675 {
00:39:14.675 "results": [
00:39:14.675 {
00:39:14.675 "job": "nvme0n1",
00:39:14.675 "core_mask": "0x2",
00:39:14.675 "workload": "randread",
00:39:14.675 "status": "finished",
00:39:14.675 "queue_depth": 16,
00:39:14.675 "io_size": 131072,
00:39:14.675 "runtime": 2.00278,
00:39:14.675 "iops": 3488.151469457454,
00:39:14.675 "mibps": 436.01893368218174,
00:39:14.675 "io_failed": 0,
00:39:14.675 "io_timeout": 0,
00:39:14.675 "avg_latency_us":
4583.016407498783, 00:39:14.675 "min_latency_us": 533.7253591714, 00:39:14.675 "max_latency_us": 19268.853992649514 00:39:14.675 } 00:39:14.675 ], 00:39:14.675 "core_count": 1 00:39:14.675 } 00:39:14.675 13:09:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:39:14.675 13:09:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:39:14.675 13:09:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:39:14.675 | .driver_specific 00:39:14.675 | .nvme_error 00:39:14.675 | .status_code 00:39:14.675 | .command_transient_transport_error' 00:39:14.675 13:09:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:39:14.938 13:09:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 226 > 0 )) 00:39:14.938 13:09:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3666478 00:39:14.938 13:09:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3666478 ']' 00:39:14.938 13:09:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3666478 00:39:14.938 13:09:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:39:14.938 13:09:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:14.938 13:09:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3666478 00:39:14.938 13:09:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:14.938 13:09:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:14.938 13:09:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3666478' 00:39:14.938 killing process with pid 3666478 00:39:14.938 13:09:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3666478 00:39:14.938 Received shutdown signal, test time was about 2.000000 seconds 00:39:14.938 00:39:14.938 Latency(us) 00:39:14.938 [2024-11-28T12:09:45.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:14.938 [2024-11-28T12:09:45.065Z] =================================================================================================================== 00:39:14.938 [2024-11-28T12:09:45.065Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:14.938 13:09:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3666478 00:39:15.198 13:09:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:39:15.198 13:09:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:39:15.198 13:09:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:39:15.198 13:09:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:39:15.198 13:09:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:39:15.198 13:09:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3667203 00:39:15.198 13:09:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3667203 /var/tmp/bperf.sock 00:39:15.198 13:09:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3667203 ']' 00:39:15.198 13:09:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:39:15.198 13:09:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:15.198 13:09:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:15.198 13:09:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:15.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:15.198 13:09:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:15.198 13:09:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:15.198 [2024-11-28 13:09:45.155150] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:39:15.198 [2024-11-28 13:09:45.155212] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3667203 ] 00:39:15.198 [2024-11-28 13:09:45.287600] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:39:15.459 [2024-11-28 13:09:45.342452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:15.459 [2024-11-28 13:09:45.358681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:16.031 13:09:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:16.031 13:09:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:39:16.031 13:09:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:16.031 13:09:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:16.031 13:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:39:16.031 13:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:16.031 13:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:16.031 13:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:16.031 13:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:16.031 13:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:16.292 nvme0n1 00:39:16.292 13:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:39:16.292 13:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:16.292 13:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:16.292 13:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:16.292 13:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:39:16.292 13:09:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:16.554 Running I/O for 2 seconds... 00:39:16.554 [2024-11-28 13:09:46.467243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee99d8 00:39:16.554 [2024-11-28 13:09:46.468169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.554 [2024-11-28 13:09:46.468195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:39:16.554 [2024-11-28 13:09:46.476035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efa7d8 00:39:16.554 [2024-11-28 13:09:46.476981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.554 [2024-11-28 13:09:46.476999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:39:16.554 [2024-11-28 13:09:46.484687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef8618 
00:39:16.554 [2024-11-28 13:09:46.485618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.554 [2024-11-28 13:09:46.485636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:39:16.554 [2024-11-28 13:09:46.493281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef6458 00:39:16.554 [2024-11-28 13:09:46.494215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.554 [2024-11-28 13:09:46.494232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:39:16.554 [2024-11-28 13:09:46.501844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef4298 00:39:16.554 [2024-11-28 13:09:46.502776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.554 [2024-11-28 13:09:46.502792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:39:16.554 [2024-11-28 13:09:46.510422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef1868 00:39:16.554 [2024-11-28 13:09:46.511327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.554 [2024-11-28 13:09:46.511343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:39:16.554 [2024-11-28 13:09:46.518976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x23d3a50) with pdu=0x200016eef6a8 00:39:16.554 [2024-11-28 13:09:46.519903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.554 [2024-11-28 13:09:46.519918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:39:16.554 [2024-11-28 13:09:46.527539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eed4e8 00:39:16.554 [2024-11-28 13:09:46.528461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.554 [2024-11-28 13:09:46.528480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:39:16.554 [2024-11-28 13:09:46.536114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eeb328 00:39:16.554 [2024-11-28 13:09:46.537032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.554 [2024-11-28 13:09:46.537048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:39:16.554 [2024-11-28 13:09:46.544681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efc128 00:39:16.554 [2024-11-28 13:09:46.545607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.554 [2024-11-28 13:09:46.545622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:39:16.555 [2024-11-28 13:09:46.553230] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef9f68 00:39:16.555 [2024-11-28 13:09:46.554122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.555 [2024-11-28 13:09:46.554138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:39:16.555 [2024-11-28 13:09:46.561760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef7da8 00:39:16.555 [2024-11-28 13:09:46.562676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.555 [2024-11-28 13:09:46.562692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:39:16.555 [2024-11-28 13:09:46.570297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef5be8 00:39:16.555 [2024-11-28 13:09:46.571094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.555 [2024-11-28 13:09:46.571109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:39:16.555 [2024-11-28 13:09:46.579504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef20d8 00:39:16.555 [2024-11-28 13:09:46.580654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.555 [2024-11-28 13:09:46.580669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 
dnr:0 00:39:16.555 [2024-11-28 13:09:46.587453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eecc78 00:39:16.555 [2024-11-28 13:09:46.588238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.555 [2024-11-28 13:09:46.588255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:39:16.555 [2024-11-28 13:09:46.595885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eedd58 00:39:16.555 [2024-11-28 13:09:46.596699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.555 [2024-11-28 13:09:46.596715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:39:16.555 [2024-11-28 13:09:46.604433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eeee38 00:39:16.555 [2024-11-28 13:09:46.605202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.555 [2024-11-28 13:09:46.605218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:39:16.555 [2024-11-28 13:09:46.612966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef8618 00:39:16.555 [2024-11-28 13:09:46.613772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.555 [2024-11-28 13:09:46.613789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:39:16.555 [2024-11-28 13:09:46.622595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef96f8 00:39:16.555 [2024-11-28 13:09:46.623857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.555 [2024-11-28 13:09:46.623872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:39:16.555 [2024-11-28 13:09:46.630180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef4f40 00:39:16.555 [2024-11-28 13:09:46.630839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.555 [2024-11-28 13:09:46.630854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.555 [2024-11-28 13:09:46.639000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee0ea0 00:39:16.555 [2024-11-28 13:09:46.639925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.555 [2024-11-28 13:09:46.639941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.555 [2024-11-28 13:09:46.647549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016edfdc0 00:39:16.555 [2024-11-28 13:09:46.648433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.555 [2024-11-28 13:09:46.648451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.555 [2024-11-28 13:09:46.656109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016edece0 00:39:16.555 [2024-11-28 13:09:46.657031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.555 [2024-11-28 13:09:46.657047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.555 [2024-11-28 13:09:46.664631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eddc00 00:39:16.555 [2024-11-28 13:09:46.665533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.555 [2024-11-28 13:09:46.665549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.555 [2024-11-28 13:09:46.673166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee7c50 00:39:16.555 [2024-11-28 13:09:46.674095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.555 [2024-11-28 13:09:46.674111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.817 [2024-11-28 13:09:46.681694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee8d30 00:39:16.817 [2024-11-28 13:09:46.682613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.817 
[2024-11-28 13:09:46.682630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.817 [2024-11-28 13:09:46.690234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef7100 00:39:16.817 [2024-11-28 13:09:46.691147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.817 [2024-11-28 13:09:46.691167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.817 [2024-11-28 13:09:46.698859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef6cc8 00:39:16.817 [2024-11-28 13:09:46.699785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.817 [2024-11-28 13:09:46.699801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.817 [2024-11-28 13:09:46.707375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef5be8 00:39:16.817 [2024-11-28 13:09:46.708181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.817 [2024-11-28 13:09:46.708197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.817 [2024-11-28 13:09:46.715900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef4b08 00:39:16.817 [2024-11-28 13:09:46.716817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17409 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.817 [2024-11-28 13:09:46.716832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.817 [2024-11-28 13:09:46.724441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eebfd0 00:39:16.817 [2024-11-28 13:09:46.725347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.817 [2024-11-28 13:09:46.725362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.817 [2024-11-28 13:09:46.732967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eed0b0 00:39:16.817 [2024-11-28 13:09:46.733886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.817 [2024-11-28 13:09:46.733901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.817 [2024-11-28 13:09:46.741501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eee190 00:39:16.817 [2024-11-28 13:09:46.742408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.817 [2024-11-28 13:09:46.742424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.817 [2024-11-28 13:09:46.750017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eef270 00:39:16.817 [2024-11-28 13:09:46.750949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:30 nsid:1 lba:8165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.817 [2024-11-28 13:09:46.750968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.817 [2024-11-28 13:09:46.758527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef81e0 00:39:16.817 [2024-11-28 13:09:46.759434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.817 [2024-11-28 13:09:46.759450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.817 [2024-11-28 13:09:46.767055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee6b70 00:39:16.817 [2024-11-28 13:09:46.767923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.817 [2024-11-28 13:09:46.767939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.817 [2024-11-28 13:09:46.775599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee12d8 00:39:16.817 [2024-11-28 13:09:46.776504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.817 [2024-11-28 13:09:46.776520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.817 [2024-11-28 13:09:46.784124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee01f8 00:39:16.817 [2024-11-28 13:09:46.785057] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.817 [2024-11-28 13:09:46.785073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.817 [2024-11-28 13:09:46.792675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016edf118 00:39:16.817 [2024-11-28 13:09:46.793589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.817 [2024-11-28 13:09:46.793605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.817 [2024-11-28 13:09:46.801190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ede038 00:39:16.817 [2024-11-28 13:09:46.802074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.817 [2024-11-28 13:09:46.802090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.817 [2024-11-28 13:09:46.809699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee7818 00:39:16.818 [2024-11-28 13:09:46.810606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.818 [2024-11-28 13:09:46.810622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.818 [2024-11-28 13:09:46.818249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee88f8 00:39:16.818 
[2024-11-28 13:09:46.819172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.818 [2024-11-28 13:09:46.819188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.818 [2024-11-28 13:09:46.826785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efc998 00:39:16.818 [2024-11-28 13:09:46.827683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.818 [2024-11-28 13:09:46.827699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.818 [2024-11-28 13:09:46.835314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef6020 00:39:16.818 [2024-11-28 13:09:46.836247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.818 [2024-11-28 13:09:46.836263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.818 [2024-11-28 13:09:46.843895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef4f40 00:39:16.818 [2024-11-28 13:09:46.844801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.818 [2024-11-28 13:09:46.844816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.818 [2024-11-28 13:09:46.852396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x23d3a50) with pdu=0x200016ef3e60 00:39:16.818 [2024-11-28 13:09:46.853300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.818 [2024-11-28 13:09:46.853315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.818 [2024-11-28 13:09:46.860939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eeb328 00:39:16.818 [2024-11-28 13:09:46.861851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.818 [2024-11-28 13:09:46.861867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.818 [2024-11-28 13:09:46.870569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eec408 00:39:16.818 [2024-11-28 13:09:46.871944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.818 [2024-11-28 13:09:46.871960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:16.818 [2024-11-28 13:09:46.878703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efc998 00:39:16.818 [2024-11-28 13:09:46.879675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.818 [2024-11-28 13:09:46.879692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:39:16.818 [2024-11-28 13:09:46.887272] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efef90 00:39:16.818 [2024-11-28 13:09:46.888224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.818 [2024-11-28 13:09:46.888240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:39:16.818 [2024-11-28 13:09:46.895824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee2c28 00:39:16.818 [2024-11-28 13:09:46.896824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.818 [2024-11-28 13:09:46.896839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:39:16.818 [2024-11-28 13:09:46.904368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef2948 00:39:16.818 [2024-11-28 13:09:46.905319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.818 [2024-11-28 13:09:46.905335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:39:16.818 [2024-11-28 13:09:46.912908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee5220 00:39:16.818 [2024-11-28 13:09:46.913913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.818 [2024-11-28 13:09:46.913928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 
dnr:0 00:39:16.818 [2024-11-28 13:09:46.921471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efc998 00:39:16.818 [2024-11-28 13:09:46.922485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.818 [2024-11-28 13:09:46.922501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:39:16.818 [2024-11-28 13:09:46.931109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efef90 00:39:16.818 [2024-11-28 13:09:46.932608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.818 [2024-11-28 13:09:46.932624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:39:16.818 [2024-11-28 13:09:46.937258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee27f0 00:39:16.818 [2024-11-28 13:09:46.937959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.818 [2024-11-28 13:09:46.937975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:39:17.080 [2024-11-28 13:09:46.945365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.080 [2024-11-28 13:09:46.946037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.080 [2024-11-28 13:09:46.946053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:17.080 [2024-11-28 13:09:46.954757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016edece0 00:39:17.080 [2024-11-28 13:09:46.955601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.080 [2024-11-28 13:09:46.955616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:17.080 [2024-11-28 13:09:46.963459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef4b08 00:39:17.080 [2024-11-28 13:09:46.964295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.080 [2024-11-28 13:09:46.964310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.080 [2024-11-28 13:09:46.971989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee73e0 00:39:17.080 [2024-11-28 13:09:46.972825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.080 [2024-11-28 13:09:46.972848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.080 [2024-11-28 13:09:46.980503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee3d08 00:39:17.080 [2024-11-28 13:09:46.981346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.080 [2024-11-28 13:09:46.981363] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.080 [2024-11-28 13:09:46.989023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee99d8 00:39:17.080 [2024-11-28 13:09:46.989874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.080 [2024-11-28 13:09:46.989890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.080 [2024-11-28 13:09:46.997546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eeea00 00:39:17.080 [2024-11-28 13:09:46.998372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.080 [2024-11-28 13:09:46.998389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.080 [2024-11-28 13:09:47.006093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efef90 00:39:17.080 [2024-11-28 13:09:47.006933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.080 [2024-11-28 13:09:47.006948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.080 [2024-11-28 13:09:47.014631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef96f8 00:39:17.080 [2024-11-28 13:09:47.015484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.080 [2024-11-28 13:09:47.015500] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.080 [2024-11-28 13:09:47.023157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef6458 00:39:17.080 [2024-11-28 13:09:47.024014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.080 [2024-11-28 13:09:47.024030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.080 [2024-11-28 13:09:47.031659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef4b08 00:39:17.080 [2024-11-28 13:09:47.032510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.080 [2024-11-28 13:09:47.032527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.080 [2024-11-28 13:09:47.040223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee73e0 00:39:17.080 [2024-11-28 13:09:47.041065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.080 [2024-11-28 13:09:47.041081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.080 [2024-11-28 13:09:47.048767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee3d08 00:39:17.080 [2024-11-28 13:09:47.049616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:39:17.081 [2024-11-28 13:09:47.049635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.081 [2024-11-28 13:09:47.057294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee99d8 00:39:17.081 [2024-11-28 13:09:47.058145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.081 [2024-11-28 13:09:47.058164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.081 [2024-11-28 13:09:47.065815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eeea00 00:39:17.081 [2024-11-28 13:09:47.066668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.081 [2024-11-28 13:09:47.066684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.081 [2024-11-28 13:09:47.074341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efef90 00:39:17.081 [2024-11-28 13:09:47.075195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.081 [2024-11-28 13:09:47.075211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.081 [2024-11-28 13:09:47.082857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef96f8 00:39:17.081 [2024-11-28 13:09:47.083706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25284 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.081 [2024-11-28 13:09:47.083722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.081 [2024-11-28 13:09:47.091408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef6458 00:39:17.081 [2024-11-28 13:09:47.092226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.081 [2024-11-28 13:09:47.092242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.081 [2024-11-28 13:09:47.099936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef4b08 00:39:17.081 [2024-11-28 13:09:47.100788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.081 [2024-11-28 13:09:47.100805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.081 [2024-11-28 13:09:47.108475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee73e0 00:39:17.081 [2024-11-28 13:09:47.109322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.081 [2024-11-28 13:09:47.109337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.081 [2024-11-28 13:09:47.116995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee3d08 00:39:17.081 [2024-11-28 13:09:47.117853] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.081 [2024-11-28 13:09:47.117868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.081 [2024-11-28 13:09:47.125503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee99d8 00:39:17.081 [2024-11-28 13:09:47.126312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.081 [2024-11-28 13:09:47.126327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.081 [2024-11-28 13:09:47.134013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eeea00 00:39:17.081 [2024-11-28 13:09:47.134859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.081 [2024-11-28 13:09:47.134875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.081 [2024-11-28 13:09:47.142556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efef90 00:39:17.081 [2024-11-28 13:09:47.143361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.081 [2024-11-28 13:09:47.143377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.081 [2024-11-28 13:09:47.151080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef96f8 00:39:17.081 [2024-11-28 13:09:47.151916] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.081 [2024-11-28 13:09:47.151931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.081 [2024-11-28 13:09:47.159610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef6458 00:39:17.081 [2024-11-28 13:09:47.160414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.081 [2024-11-28 13:09:47.160429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.081 [2024-11-28 13:09:47.168110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef4b08 00:39:17.081 [2024-11-28 13:09:47.168962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.081 [2024-11-28 13:09:47.168978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.081 [2024-11-28 13:09:47.176618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee73e0 00:39:17.081 [2024-11-28 13:09:47.177463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.081 [2024-11-28 13:09:47.177478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.081 [2024-11-28 13:09:47.185124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with 
pdu=0x200016ee3d08 00:39:17.081 [2024-11-28 13:09:47.185969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.081 [2024-11-28 13:09:47.185984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.081 [2024-11-28 13:09:47.193654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee99d8 00:39:17.081 [2024-11-28 13:09:47.194481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.081 [2024-11-28 13:09:47.194497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.081 [2024-11-28 13:09:47.202168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eeea00 00:39:17.081 [2024-11-28 13:09:47.203021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.081 [2024-11-28 13:09:47.203036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.343 [2024-11-28 13:09:47.210712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efef90 00:39:17.343 [2024-11-28 13:09:47.211543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.343 [2024-11-28 13:09:47.211559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.343 [2024-11-28 13:09:47.219229] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef96f8 00:39:17.343 [2024-11-28 13:09:47.220070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.343 [2024-11-28 13:09:47.220086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.343 [2024-11-28 13:09:47.227765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef6458 00:39:17.343 [2024-11-28 13:09:47.228612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.343 [2024-11-28 13:09:47.228628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.343 [2024-11-28 13:09:47.236300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef4b08 00:39:17.343 [2024-11-28 13:09:47.237130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.343 [2024-11-28 13:09:47.237145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.343 [2024-11-28 13:09:47.244837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee73e0 00:39:17.343 [2024-11-28 13:09:47.245670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.343 [2024-11-28 13:09:47.245686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.343 [2024-11-28 
13:09:47.253361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee3d08 00:39:17.343 [2024-11-28 13:09:47.254193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.343 [2024-11-28 13:09:47.254209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.343 [2024-11-28 13:09:47.261867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee99d8 00:39:17.343 [2024-11-28 13:09:47.262720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.343 [2024-11-28 13:09:47.262735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.343 [2024-11-28 13:09:47.270359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eeea00 00:39:17.343 [2024-11-28 13:09:47.271092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.343 [2024-11-28 13:09:47.271110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.343 [2024-11-28 13:09:47.278884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efef90 00:39:17.343 [2024-11-28 13:09:47.279681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.344 [2024-11-28 13:09:47.279697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 
sqhd:0053 p:0 m:0 dnr:0 00:39:17.344 [2024-11-28 13:09:47.287390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef96f8 00:39:17.344 [2024-11-28 13:09:47.288242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.344 [2024-11-28 13:09:47.288258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.344 [2024-11-28 13:09:47.295907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef6458 00:39:17.344 [2024-11-28 13:09:47.296756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.344 [2024-11-28 13:09:47.296772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.344 [2024-11-28 13:09:47.304432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef4b08 00:39:17.344 [2024-11-28 13:09:47.305258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.344 [2024-11-28 13:09:47.305274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.344 [2024-11-28 13:09:47.312923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee73e0 00:39:17.344 [2024-11-28 13:09:47.313781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.344 [2024-11-28 13:09:47.313797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.344 [2024-11-28 13:09:47.321448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee3d08 00:39:17.344 [2024-11-28 13:09:47.322241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.344 [2024-11-28 13:09:47.322257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.344 [2024-11-28 13:09:47.329967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee99d8 00:39:17.344 [2024-11-28 13:09:47.330817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.344 [2024-11-28 13:09:47.330833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.344 [2024-11-28 13:09:47.338474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eeea00 00:39:17.344 [2024-11-28 13:09:47.339305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.344 [2024-11-28 13:09:47.339321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.344 [2024-11-28 13:09:47.347002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efef90 00:39:17.344 [2024-11-28 13:09:47.347858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.344 [2024-11-28 13:09:47.347874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.344 [2024-11-28 13:09:47.355502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef96f8 00:39:17.344 [2024-11-28 13:09:47.356223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.344 [2024-11-28 13:09:47.356239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.344 [2024-11-28 13:09:47.363995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef6458 00:39:17.344 [2024-11-28 13:09:47.364844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.344 [2024-11-28 13:09:47.364860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.344 [2024-11-28 13:09:47.372516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef4b08 00:39:17.344 [2024-11-28 13:09:47.373357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.344 [2024-11-28 13:09:47.373372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.344 [2024-11-28 13:09:47.381031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee73e0 00:39:17.344 [2024-11-28 13:09:47.381884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:39:17.344 [2024-11-28 13:09:47.381900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.344 [2024-11-28 13:09:47.389539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee3d08 00:39:17.344 [2024-11-28 13:09:47.390359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.344 [2024-11-28 13:09:47.390375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.344 [2024-11-28 13:09:47.398040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee99d8 00:39:17.344 [2024-11-28 13:09:47.398888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.344 [2024-11-28 13:09:47.398904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.344 [2024-11-28 13:09:47.406541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eeea00 00:39:17.344 [2024-11-28 13:09:47.407367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.344 [2024-11-28 13:09:47.407383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.344 [2024-11-28 13:09:47.415061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efef90 00:39:17.344 [2024-11-28 13:09:47.415898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:268 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.344 [2024-11-28 13:09:47.415914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.344 [2024-11-28 13:09:47.423580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef96f8 00:39:17.344 [2024-11-28 13:09:47.424432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.344 [2024-11-28 13:09:47.424448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.344 [2024-11-28 13:09:47.432091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef6458 00:39:17.344 [2024-11-28 13:09:47.432881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.344 [2024-11-28 13:09:47.432897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.344 [2024-11-28 13:09:47.440617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef4b08 00:39:17.344 [2024-11-28 13:09:47.441447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.344 [2024-11-28 13:09:47.441463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.344 [2024-11-28 13:09:47.449118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee73e0 00:39:17.344 [2024-11-28 13:09:47.449969] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.344 [2024-11-28 13:09:47.449985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.344 [2024-11-28 13:09:47.457612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee3d08 00:39:17.344 [2024-11-28 13:09:47.458744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.344 [2024-11-28 13:09:47.458759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:17.344 29789.00 IOPS, 116.36 MiB/s [2024-11-28T12:09:47.471Z] [2024-11-28 13:09:47.466405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eeea00 00:39:17.344 [2024-11-28 13:09:47.467469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.344 [2024-11-28 13:09:47.467485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:39:17.607 [2024-11-28 13:09:47.474390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.607 [2024-11-28 13:09:47.475136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.607 [2024-11-28 13:09:47.475152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.607 [2024-11-28 13:09:47.482811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with 
pdu=0x200016efdeb0 00:39:17.607 [2024-11-28 13:09:47.483565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.607 [2024-11-28 13:09:47.483580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.607 [2024-11-28 13:09:47.491329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.607 [2024-11-28 13:09:47.492078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.607 [2024-11-28 13:09:47.492097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.607 [2024-11-28 13:09:47.499840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.607 [2024-11-28 13:09:47.500572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.607 [2024-11-28 13:09:47.500588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.607 [2024-11-28 13:09:47.508363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.607 [2024-11-28 13:09:47.509102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.607 [2024-11-28 13:09:47.509117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.607 [2024-11-28 13:09:47.516887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.607 [2024-11-28 13:09:47.517633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.607 [2024-11-28 13:09:47.517649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.607 [2024-11-28 13:09:47.525424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.607 [2024-11-28 13:09:47.526161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.607 [2024-11-28 13:09:47.526177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.607 [2024-11-28 13:09:47.533941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.607 [2024-11-28 13:09:47.534668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.607 [2024-11-28 13:09:47.534684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.607 [2024-11-28 13:09:47.542443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.607 [2024-11-28 13:09:47.543167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.607 [2024-11-28 13:09:47.543182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.607 [2024-11-28 13:09:47.550945] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.607 [2024-11-28 13:09:47.551675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.607 [2024-11-28 13:09:47.551691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.607 [2024-11-28 13:09:47.559463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.607 [2024-11-28 13:09:47.560191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.607 [2024-11-28 13:09:47.560207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.607 [2024-11-28 13:09:47.567989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.608 [2024-11-28 13:09:47.568681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.608 [2024-11-28 13:09:47.568697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.608 [2024-11-28 13:09:47.576538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.608 [2024-11-28 13:09:47.577272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.608 [2024-11-28 13:09:47.577288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:39:17.608 [2024-11-28 13:09:47.585054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.608 [2024-11-28 13:09:47.585802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.608 [2024-11-28 13:09:47.585818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.608 [2024-11-28 13:09:47.593567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.608 [2024-11-28 13:09:47.594319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.608 [2024-11-28 13:09:47.594335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.608 [2024-11-28 13:09:47.602098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.608 [2024-11-28 13:09:47.602849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.608 [2024-11-28 13:09:47.602865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.608 [2024-11-28 13:09:47.610605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.608 [2024-11-28 13:09:47.611341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.608 [2024-11-28 13:09:47.611356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.608 [2024-11-28 13:09:47.619124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.608 [2024-11-28 13:09:47.619851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.608 [2024-11-28 13:09:47.619867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.608 [2024-11-28 13:09:47.627656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.608 [2024-11-28 13:09:47.628357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.608 [2024-11-28 13:09:47.628374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.608 [2024-11-28 13:09:47.636161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.608 [2024-11-28 13:09:47.636892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.608 [2024-11-28 13:09:47.636907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.608 [2024-11-28 13:09:47.644656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.608 [2024-11-28 13:09:47.645414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.608 [2024-11-28 13:09:47.645430] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.608 [2024-11-28 13:09:47.653196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.608 [2024-11-28 13:09:47.653926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.608 [2024-11-28 13:09:47.653941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.608 [2024-11-28 13:09:47.661705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.608 [2024-11-28 13:09:47.662410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.608 [2024-11-28 13:09:47.662425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.608 [2024-11-28 13:09:47.670226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.608 [2024-11-28 13:09:47.670913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.608 [2024-11-28 13:09:47.670928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.608 [2024-11-28 13:09:47.678806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.608 [2024-11-28 13:09:47.679537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.608 [2024-11-28 13:09:47.679552] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.608 [2024-11-28 13:09:47.687312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.608 [2024-11-28 13:09:47.688036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.608 [2024-11-28 13:09:47.688052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.608 [2024-11-28 13:09:47.695836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.608 [2024-11-28 13:09:47.696569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.608 [2024-11-28 13:09:47.696584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.608 [2024-11-28 13:09:47.704444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.608 [2024-11-28 13:09:47.705187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.608 [2024-11-28 13:09:47.705204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.608 [2024-11-28 13:09:47.712966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.608 [2024-11-28 13:09:47.713721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:39:17.608 [2024-11-28 13:09:47.713739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.608 [2024-11-28 13:09:47.721514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.608 [2024-11-28 13:09:47.722225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.608 [2024-11-28 13:09:47.722241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.608 [2024-11-28 13:09:47.730014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.608 [2024-11-28 13:09:47.730734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.608 [2024-11-28 13:09:47.730750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.870 [2024-11-28 13:09:47.738531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.870 [2024-11-28 13:09:47.739218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.870 [2024-11-28 13:09:47.739234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.870 [2024-11-28 13:09:47.747070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.870 [2024-11-28 13:09:47.747812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19906 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.870 [2024-11-28 13:09:47.747828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.870 [2024-11-28 13:09:47.755600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.870 [2024-11-28 13:09:47.756335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.870 [2024-11-28 13:09:47.756351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.870 [2024-11-28 13:09:47.764115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.870 [2024-11-28 13:09:47.764849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.870 [2024-11-28 13:09:47.764865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.870 [2024-11-28 13:09:47.772612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.870 [2024-11-28 13:09:47.773356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.870 [2024-11-28 13:09:47.773371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.870 [2024-11-28 13:09:47.781108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.870 [2024-11-28 13:09:47.781860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.870 [2024-11-28 13:09:47.781875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.870 [2024-11-28 13:09:47.789661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.870 [2024-11-28 13:09:47.790374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.870 [2024-11-28 13:09:47.790390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.870 [2024-11-28 13:09:47.798192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.870 [2024-11-28 13:09:47.798933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.870 [2024-11-28 13:09:47.798948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.870 [2024-11-28 13:09:47.806709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.870 [2024-11-28 13:09:47.807421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.870 [2024-11-28 13:09:47.807437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.870 [2024-11-28 13:09:47.815224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 
00:39:17.870 [2024-11-28 13:09:47.815948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.870 [2024-11-28 13:09:47.815963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.870 [2024-11-28 13:09:47.823711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.870 [2024-11-28 13:09:47.824439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.870 [2024-11-28 13:09:47.824455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.870 [2024-11-28 13:09:47.832210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.870 [2024-11-28 13:09:47.832905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.871 [2024-11-28 13:09:47.832920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.871 [2024-11-28 13:09:47.840740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.871 [2024-11-28 13:09:47.841467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.871 [2024-11-28 13:09:47.841483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.871 [2024-11-28 13:09:47.849259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.871 [2024-11-28 13:09:47.850003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.871 [2024-11-28 13:09:47.850018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.871 [2024-11-28 13:09:47.857774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.871 [2024-11-28 13:09:47.858522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.871 [2024-11-28 13:09:47.858538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.871 [2024-11-28 13:09:47.866290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.871 [2024-11-28 13:09:47.867014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.871 [2024-11-28 13:09:47.867029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.871 [2024-11-28 13:09:47.874789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.871 [2024-11-28 13:09:47.875520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.871 [2024-11-28 13:09:47.875536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.871 [2024-11-28 13:09:47.883303] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.871 [2024-11-28 13:09:47.884173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.871 [2024-11-28 13:09:47.884188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.871 [2024-11-28 13:09:47.891962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.871 [2024-11-28 13:09:47.892695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.871 [2024-11-28 13:09:47.892711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.871 [2024-11-28 13:09:47.900493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.871 [2024-11-28 13:09:47.901225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.871 [2024-11-28 13:09:47.901241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.871 [2024-11-28 13:09:47.909001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.871 [2024-11-28 13:09:47.909738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.871 [2024-11-28 13:09:47.909754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:39:17.871 [2024-11-28 13:09:47.918591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:17.871 [2024-11-28 13:09:47.919774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.871 [2024-11-28 13:09:47.919789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.871 [2024-11-28 13:09:47.926134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efc128 00:39:17.871 [2024-11-28 13:09:47.926732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.871 [2024-11-28 13:09:47.926748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:17.871 [2024-11-28 13:09:47.935978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef3e60 00:39:17.871 [2024-11-28 13:09:47.937044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.871 [2024-11-28 13:09:47.937063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:17.871 [2024-11-28 13:09:47.943167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eed4e8 00:39:17.871 [2024-11-28 13:09:47.943946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.871 [2024-11-28 13:09:47.943961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:69 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:17.871 [2024-11-28 13:09:47.951702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee6fa8 00:39:17.871 [2024-11-28 13:09:47.952411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.871 [2024-11-28 13:09:47.952426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:17.871 [2024-11-28 13:09:47.960216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee4578 00:39:17.871 [2024-11-28 13:09:47.960971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.871 [2024-11-28 13:09:47.960986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:17.871 [2024-11-28 13:09:47.968712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef3a28 00:39:17.871 [2024-11-28 13:09:47.969414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.871 [2024-11-28 13:09:47.969430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:17.871 [2024-11-28 13:09:47.977251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ede038 00:39:17.871 [2024-11-28 13:09:47.977987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.871 [2024-11-28 13:09:47.978002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:17.871 [2024-11-28 13:09:47.985763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee1710 00:39:17.871 [2024-11-28 13:09:47.986512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:17.871 [2024-11-28 13:09:47.986528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:17.871 [2024-11-28 13:09:47.994300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016edece0 00:39:18.133 [2024-11-28 13:09:47.995038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.133 [2024-11-28 13:09:47.995053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:18.133 [2024-11-28 13:09:48.002819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee8d30 00:39:18.133 [2024-11-28 13:09:48.003559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.133 [2024-11-28 13:09:48.003575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:18.133 [2024-11-28 13:09:48.011322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef1868 00:39:18.133 [2024-11-28 13:09:48.012083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.133 [2024-11-28 13:09:48.012098] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:18.133 [2024-11-28 13:09:48.019830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eedd58 00:39:18.133 [2024-11-28 13:09:48.020564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.133 [2024-11-28 13:09:48.020579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:18.133 [2024-11-28 13:09:48.028363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efda78 00:39:18.133 [2024-11-28 13:09:48.029114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.133 [2024-11-28 13:09:48.029129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:18.133 [2024-11-28 13:09:48.036902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef31b8 00:39:18.133 [2024-11-28 13:09:48.037644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.133 [2024-11-28 13:09:48.037659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:18.133 [2024-11-28 13:09:48.045446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef1430 00:39:18.133 [2024-11-28 13:09:48.046187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.133 
[2024-11-28 13:09:48.046202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:18.133 [2024-11-28 13:09:48.053933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef4b08 00:39:18.133 [2024-11-28 13:09:48.054671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.133 [2024-11-28 13:09:48.054687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:18.133 [2024-11-28 13:09:48.062425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef6458 00:39:18.133 [2024-11-28 13:09:48.063139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.133 [2024-11-28 13:09:48.063154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:18.133 [2024-11-28 13:09:48.070936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eeff18 00:39:18.133 [2024-11-28 13:09:48.071673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.133 [2024-11-28 13:09:48.071688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:18.133 [2024-11-28 13:09:48.079453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef7100 00:39:18.133 [2024-11-28 13:09:48.080145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3549 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:39:18.133 [2024-11-28 13:09:48.080162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:18.133 [2024-11-28 13:09:48.087957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eed4e8 00:39:18.133 [2024-11-28 13:09:48.088693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.133 [2024-11-28 13:09:48.088709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:18.133 [2024-11-28 13:09:48.096481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee6fa8 00:39:18.133 [2024-11-28 13:09:48.097096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.133 [2024-11-28 13:09:48.097112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:18.133 [2024-11-28 13:09:48.105270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef3e60 00:39:18.133 [2024-11-28 13:09:48.106120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.133 [2024-11-28 13:09:48.106136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.133 [2024-11-28 13:09:48.113944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eeb328 00:39:18.133 [2024-11-28 13:09:48.114763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:46 nsid:1 lba:7953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.133 [2024-11-28 13:09:48.114779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.133 [2024-11-28 13:09:48.122458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef3a28 00:39:18.133 [2024-11-28 13:09:48.123328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.133 [2024-11-28 13:09:48.123343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.133 [2024-11-28 13:09:48.130995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef8a50 00:39:18.134 [2024-11-28 13:09:48.131849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.134 [2024-11-28 13:09:48.131864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.134 [2024-11-28 13:09:48.139533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ede038 00:39:18.134 [2024-11-28 13:09:48.140391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.134 [2024-11-28 13:09:48.140406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.134 [2024-11-28 13:09:48.148063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efe2e8 00:39:18.134 [2024-11-28 13:09:48.148922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.134 [2024-11-28 13:09:48.148938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.134 [2024-11-28 13:09:48.156594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee1710 00:39:18.134 [2024-11-28 13:09:48.157449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:31 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.134 [2024-11-28 13:09:48.157468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.134 [2024-11-28 13:09:48.165138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee0630 00:39:18.134 [2024-11-28 13:09:48.166010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.134 [2024-11-28 13:09:48.166025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.134 [2024-11-28 13:09:48.173676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eea680 00:39:18.134 [2024-11-28 13:09:48.174531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.134 [2024-11-28 13:09:48.174548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.134 [2024-11-28 13:09:48.182221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee5ec8 00:39:18.134 
[2024-11-28 13:09:48.183083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.134 [2024-11-28 13:09:48.183099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.134 [2024-11-28 13:09:48.190743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee1f80 00:39:18.134 [2024-11-28 13:09:48.191606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.134 [2024-11-28 13:09:48.191623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.134 [2024-11-28 13:09:48.199259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee9e10 00:39:18.134 [2024-11-28 13:09:48.200127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.134 [2024-11-28 13:09:48.200143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.134 [2024-11-28 13:09:48.207808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef2510 00:39:18.134 [2024-11-28 13:09:48.208671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.134 [2024-11-28 13:09:48.208686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.134 [2024-11-28 13:09:48.216330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x23d3a50) with pdu=0x200016efd208 00:39:18.134 [2024-11-28 13:09:48.217145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.134 [2024-11-28 13:09:48.217165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.134 [2024-11-28 13:09:48.224856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee9168 00:39:18.134 [2024-11-28 13:09:48.225675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.134 [2024-11-28 13:09:48.225691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.134 [2024-11-28 13:09:48.233374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef2948 00:39:18.134 [2024-11-28 13:09:48.234232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.134 [2024-11-28 13:09:48.234248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.134 [2024-11-28 13:09:48.241889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee99d8 00:39:18.134 [2024-11-28 13:09:48.242755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.134 [2024-11-28 13:09:48.242771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.134 [2024-11-28 13:09:48.250457] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efb8b8 00:39:18.134 [2024-11-28 13:09:48.251323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.134 [2024-11-28 13:09:48.251339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.396 [2024-11-28 13:09:48.258991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eeea00 00:39:18.396 [2024-11-28 13:09:48.259868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.396 [2024-11-28 13:09:48.259884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.396 [2024-11-28 13:09:48.267535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef35f0 00:39:18.396 [2024-11-28 13:09:48.268367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.396 [2024-11-28 13:09:48.268383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.396 [2024-11-28 13:09:48.276072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef8e88 00:39:18.396 [2024-11-28 13:09:48.276948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.396 [2024-11-28 13:09:48.276964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 
dnr:0 00:39:18.396 [2024-11-28 13:09:48.284593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ede470 00:39:18.396 [2024-11-28 13:09:48.285446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.396 [2024-11-28 13:09:48.285461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.396 [2024-11-28 13:09:48.293112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efdeb0 00:39:18.396 [2024-11-28 13:09:48.293929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.396 [2024-11-28 13:09:48.293945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.396 [2024-11-28 13:09:48.301648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee7c50 00:39:18.396 [2024-11-28 13:09:48.302527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.396 [2024-11-28 13:09:48.302543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.396 [2024-11-28 13:09:48.310181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eec408 00:39:18.396 [2024-11-28 13:09:48.311051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.396 [2024-11-28 13:09:48.311067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.396 [2024-11-28 13:09:48.318705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef81e0 00:39:18.396 [2024-11-28 13:09:48.319565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.396 [2024-11-28 13:09:48.319581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.396 [2024-11-28 13:09:48.327220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee6b70 00:39:18.396 [2024-11-28 13:09:48.328073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.396 [2024-11-28 13:09:48.328089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.396 [2024-11-28 13:09:48.335726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee0ea0 00:39:18.396 [2024-11-28 13:09:48.336587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.396 [2024-11-28 13:09:48.336603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.396 [2024-11-28 13:09:48.344257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee5658 00:39:18.396 [2024-11-28 13:09:48.345121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.396 [2024-11-28 13:09:48.345138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.396 [2024-11-28 13:09:48.352783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef96f8 00:39:18.396 [2024-11-28 13:09:48.353649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.396 [2024-11-28 13:09:48.353665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.396 [2024-11-28 13:09:48.361331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eef6a8 00:39:18.396 [2024-11-28 13:09:48.362178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.396 [2024-11-28 13:09:48.362194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.396 [2024-11-28 13:09:48.369850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efef90 00:39:18.396 [2024-11-28 13:09:48.370724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.396 [2024-11-28 13:09:48.370740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.397 [2024-11-28 13:09:48.378363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef6890 00:39:18.397 [2024-11-28 13:09:48.379217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.397 [2024-11-28 13:09:48.379239] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.397 [2024-11-28 13:09:48.386874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef3e60 00:39:18.397 [2024-11-28 13:09:48.387735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.397 [2024-11-28 13:09:48.387751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.397 [2024-11-28 13:09:48.395421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016eeb328 00:39:18.397 [2024-11-28 13:09:48.396277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.397 [2024-11-28 13:09:48.396293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.397 [2024-11-28 13:09:48.403960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef3a28 00:39:18.397 [2024-11-28 13:09:48.404821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.397 [2024-11-28 13:09:48.404837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.397 [2024-11-28 13:09:48.412492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef8a50 00:39:18.397 [2024-11-28 13:09:48.413353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15136 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:39:18.397 [2024-11-28 13:09:48.413369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.397 [2024-11-28 13:09:48.421021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ede038 00:39:18.397 [2024-11-28 13:09:48.421883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.397 [2024-11-28 13:09:48.421899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.397 [2024-11-28 13:09:48.429546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efe2e8 00:39:18.397 [2024-11-28 13:09:48.430415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.397 [2024-11-28 13:09:48.430431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.397 [2024-11-28 13:09:48.438079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ee1710 00:39:18.397 [2024-11-28 13:09:48.438927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.397 [2024-11-28 13:09:48.438943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:18.397 [2024-11-28 13:09:48.446906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016efa7d8 00:39:18.397 [2024-11-28 13:09:48.447488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 
lba:24280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.397 [2024-11-28 13:09:48.447504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.397 [2024-11-28 13:09:48.456657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3a50) with pdu=0x200016ef2d80 00:39:18.397 [2024-11-28 13:09:48.458555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:18.397 [2024-11-28 13:09:48.458571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.397 29819.00 IOPS, 116.48 MiB/s 00:39:18.397 Latency(us) 00:39:18.397 [2024-11-28T12:09:48.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:18.397 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:18.397 nvme0n1 : 2.00 29849.67 116.60 0.00 0.00 4284.15 1717.50 9743.91 00:39:18.397 [2024-11-28T12:09:48.524Z] =================================================================================================================== 00:39:18.397 [2024-11-28T12:09:48.524Z] Total : 29849.67 116.60 0.00 0.00 4284.15 1717.50 9743.91 00:39:18.397 { 00:39:18.397 "results": [ 00:39:18.397 { 00:39:18.397 "job": "nvme0n1", 00:39:18.397 "core_mask": "0x2", 00:39:18.397 "workload": "randwrite", 00:39:18.397 "status": "finished", 00:39:18.397 "queue_depth": 128, 00:39:18.397 "io_size": 4096, 00:39:18.397 "runtime": 2.002233, 00:39:18.397 "iops": 29849.672840273834, 00:39:18.397 "mibps": 116.60028453231966, 00:39:18.397 "io_failed": 0, 00:39:18.397 "io_timeout": 0, 00:39:18.397 "avg_latency_us": 4284.146229768197, 00:39:18.397 "min_latency_us": 1717.5008352823254, 00:39:18.397 "max_latency_us": 9743.909121282994 00:39:18.397 } 00:39:18.397 ], 00:39:18.397 "core_count": 1 
00:39:18.397 } 00:39:18.397 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:39:18.397 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:39:18.397 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:39:18.397 | .driver_specific 00:39:18.397 | .nvme_error 00:39:18.397 | .status_code 00:39:18.397 | .command_transient_transport_error' 00:39:18.397 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:39:18.659 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 234 > 0 )) 00:39:18.659 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3667203 00:39:18.659 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3667203 ']' 00:39:18.659 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3667203 00:39:18.659 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:39:18.659 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:18.659 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3667203 00:39:18.659 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:18.659 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:18.659 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 3667203' 00:39:18.659 killing process with pid 3667203 00:39:18.659 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3667203 00:39:18.659 Received shutdown signal, test time was about 2.000000 seconds 00:39:18.659 00:39:18.659 Latency(us) 00:39:18.659 [2024-11-28T12:09:48.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:18.659 [2024-11-28T12:09:48.786Z] =================================================================================================================== 00:39:18.659 [2024-11-28T12:09:48.786Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:18.659 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3667203 00:39:18.920 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:39:18.920 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:39:18.920 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:39:18.920 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:39:18.920 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:39:18.920 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3667942 00:39:18.920 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3667942 /var/tmp/bperf.sock 00:39:18.920 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3667942 ']' 00:39:18.920 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:39:18.920 13:09:48 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:18.920 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:18.920 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:18.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:18.920 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:18.920 13:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:18.920 [2024-11-28 13:09:48.871544] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:39:18.920 [2024-11-28 13:09:48.871603] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3667942 ] 00:39:18.920 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:18.920 Zero copy mechanism will not be used. 00:39:18.920 [2024-11-28 13:09:49.003832] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:39:19.181 [2024-11-28 13:09:49.058239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:19.181 [2024-11-28 13:09:49.074331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:19.752 13:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:19.752 13:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:39:19.752 13:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:19.752 13:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:19.752 13:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:39:19.752 13:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.752 13:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:19.752 13:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:19.752 13:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:19.752 13:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:20.013 nvme0n1 00:39:20.013 13:09:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:39:20.013 13:09:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:20.013 13:09:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:20.013 13:09:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:20.013 13:09:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:39:20.013 13:09:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:20.276 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:20.276 Zero copy mechanism will not be used. 00:39:20.276 Running I/O for 2 seconds... 00:39:20.276 [2024-11-28 13:09:50.199123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.276 [2024-11-28 13:09:50.199288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.276 [2024-11-28 13:09:50.199315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:20.276 [2024-11-28 13:09:50.204501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.276 [2024-11-28 13:09:50.204575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.276 [2024-11-28 13:09:50.204593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:20.276 
[2024-11-28 13:09:50.208466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.276 [2024-11-28 13:09:50.208527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.276 [2024-11-28 13:09:50.208544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:20.276 [2024-11-28 13:09:50.212090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.276 [2024-11-28 13:09:50.212173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.276 [2024-11-28 13:09:50.212189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:20.276 [2024-11-28 13:09:50.215652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.276 [2024-11-28 13:09:50.215709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.276 [2024-11-28 13:09:50.215725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:20.276 [2024-11-28 13:09:50.220553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.276 [2024-11-28 13:09:50.220625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.276 [2024-11-28 13:09:50.220641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:20.276 [2024-11-28 13:09:50.226523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.276 [2024-11-28 13:09:50.226644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.276 [2024-11-28 13:09:50.226665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:20.276 [2024-11-28 13:09:50.233937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.276 [2024-11-28 13:09:50.234013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.276 [2024-11-28 13:09:50.234029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:20.276 [2024-11-28 13:09:50.239851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.276 [2024-11-28 13:09:50.239909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.276 [2024-11-28 13:09:50.239925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:20.276 [2024-11-28 13:09:50.246701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.276 [2024-11-28 13:09:50.246777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.276 [2024-11-28 13:09:50.246793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:20.276 [2024-11-28 13:09:50.251242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.276 [2024-11-28 13:09:50.251297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.276 [2024-11-28 13:09:50.251313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:20.276 [2024-11-28 13:09:50.259233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.276 [2024-11-28 13:09:50.259431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.276 [2024-11-28 13:09:50.259449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:20.276 [2024-11-28 13:09:50.265085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.276 [2024-11-28 13:09:50.265167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.276 [2024-11-28 13:09:50.265183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:20.276 [2024-11-28 13:09:50.271276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.276 [2024-11-28 13:09:50.271334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.276 [2024-11-28 13:09:50.271349] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:20.276 [2024-11-28 13:09:50.275292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.276 [2024-11-28 13:09:50.275352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.276 [2024-11-28 13:09:50.275367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:20.276 [2024-11-28 13:09:50.279263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.276 [2024-11-28 13:09:50.279329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.276 [2024-11-28 13:09:50.279345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:20.276 [2024-11-28 13:09:50.283419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.276 [2024-11-28 13:09:50.283493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.276 [2024-11-28 13:09:50.283509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:20.276 [2024-11-28 13:09:50.287129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.276 [2024-11-28 13:09:50.287206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:39:20.276 [2024-11-28 13:09:50.287221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:20.276 [2024-11-28 13:09:50.290670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.276 [2024-11-28 13:09:50.290736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.276 [2024-11-28 13:09:50.290751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:20.277 [2024-11-28 13:09:50.294477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.277 [2024-11-28 13:09:50.294534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.277 [2024-11-28 13:09:50.294549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:20.277 [2024-11-28 13:09:50.299259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.277 [2024-11-28 13:09:50.299333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.277 [2024-11-28 13:09:50.299349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:20.277 [2024-11-28 13:09:50.302637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.277 [2024-11-28 13:09:50.302695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.277 [2024-11-28 13:09:50.302710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:20.277 [2024-11-28 13:09:50.309200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.277 [2024-11-28 13:09:50.309254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.277 [2024-11-28 13:09:50.309270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:20.277 [2024-11-28 13:09:50.313330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.277 [2024-11-28 13:09:50.313583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.277 [2024-11-28 13:09:50.313598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:20.277 [2024-11-28 13:09:50.317549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.277 [2024-11-28 13:09:50.317605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.277 [2024-11-28 13:09:50.317621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:20.277 [2024-11-28 13:09:50.321545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.277 [2024-11-28 13:09:50.321591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.277 [2024-11-28 13:09:50.321606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:20.277 [2024-11-28 13:09:50.326221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.277 [2024-11-28 13:09:50.326287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.277 [2024-11-28 13:09:50.326302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:20.277 [2024-11-28 13:09:50.330594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.277 [2024-11-28 13:09:50.330670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.277 [2024-11-28 13:09:50.330686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:20.277 [2024-11-28 13:09:50.334691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.277 [2024-11-28 13:09:50.334757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.277 [2024-11-28 13:09:50.334773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:20.277 [2024-11-28 13:09:50.339063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 
00:39:20.277 [2024-11-28 13:09:50.339117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.277 [2024-11-28 13:09:50.339132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:20.277 [2024-11-28 13:09:50.345265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.277 [2024-11-28 13:09:50.345329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.277 [2024-11-28 13:09:50.345345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:20.277 [2024-11-28 13:09:50.350433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.277 [2024-11-28 13:09:50.350487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.277 [2024-11-28 13:09:50.350502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:20.277 [2024-11-28 13:09:50.355912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.277 [2024-11-28 13:09:50.355971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.277 [2024-11-28 13:09:50.355990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:20.277 [2024-11-28 13:09:50.362367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.277 [2024-11-28 13:09:50.362596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.277 [2024-11-28 13:09:50.362611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:20.277 [2024-11-28 13:09:50.371357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.277 [2024-11-28 13:09:50.371742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.277 [2024-11-28 13:09:50.371759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:20.277 [2024-11-28 13:09:50.382274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.277 [2024-11-28 13:09:50.382525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.277 [2024-11-28 13:09:50.382540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:20.277 [2024-11-28 13:09:50.393480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.277 [2024-11-28 13:09:50.393759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.277 [2024-11-28 13:09:50.393774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:20.541 [2024-11-28 13:09:50.403959] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.541 [2024-11-28 13:09:50.404184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.541 [2024-11-28 13:09:50.404201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:20.541 [2024-11-28 13:09:50.415129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.541 [2024-11-28 13:09:50.415344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.541 [2024-11-28 13:09:50.415360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:20.541 [2024-11-28 13:09:50.426369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.541 [2024-11-28 13:09:50.426587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.541 [2024-11-28 13:09:50.426602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:20.541 [2024-11-28 13:09:50.436185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.541 [2024-11-28 13:09:50.436468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.541 [2024-11-28 13:09:50.436485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:39:20.541 [2024-11-28 13:09:50.446376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.541 [2024-11-28 13:09:50.446696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.541 [2024-11-28 13:09:50.446712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:20.541 [2024-11-28 13:09:50.456935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.541 [2024-11-28 13:09:50.457210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.541 [2024-11-28 13:09:50.457226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:20.541 [2024-11-28 13:09:50.467796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.541 [2024-11-28 13:09:50.468010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.541 [2024-11-28 13:09:50.468025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:20.541 [2024-11-28 13:09:50.477706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.541 [2024-11-28 13:09:50.477934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.541 [2024-11-28 13:09:50.477949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:20.541 [2024-11-28 13:09:50.486345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.541 [2024-11-28 13:09:50.486508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.541 [2024-11-28 13:09:50.486524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:20.541 [2024-11-28 13:09:50.496064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.541 [2024-11-28 13:09:50.496305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.541 [2024-11-28 13:09:50.496321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:20.541 [2024-11-28 13:09:50.505729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.541 [2024-11-28 13:09:50.505917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.541 [2024-11-28 13:09:50.505933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:20.541 [2024-11-28 13:09:50.510837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.541 [2024-11-28 13:09:50.510988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.541 [2024-11-28 13:09:50.511003] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:20.541 [2024-11-28 13:09:50.513917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.541 [2024-11-28 13:09:50.514108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.541 [2024-11-28 13:09:50.514124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:20.541 [2024-11-28 13:09:50.517365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.541 [2024-11-28 13:09:50.517539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.542 [2024-11-28 13:09:50.517555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:20.542 [2024-11-28 13:09:50.520721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.542 [2024-11-28 13:09:50.520893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.542 [2024-11-28 13:09:50.520909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:20.542 [2024-11-28 13:09:50.523908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.542 [2024-11-28 13:09:50.524076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:20.542 [2024-11-28 13:09:50.524093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.526869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.527056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.527072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.529791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.529964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.529980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.532674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.532922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.532939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.536655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.536826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.536841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.539714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.539883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.539899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.542655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.542818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.542841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.545673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.545841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.545858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.549164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.549373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.549390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.558786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.558951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.558966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.562714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.562890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.562906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.569863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.569987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.570002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.572760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.572880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.572896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.575607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.575731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.575747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.578412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.578535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.578550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.581164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.581302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.581317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.584059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.584202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.584217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.586891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.587028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.587044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.589579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.589715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.589730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.592245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.592375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.592390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.594878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.595019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.595034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.597480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.597616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.597632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.600088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.600227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.600242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.602658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.602797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.602814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.605733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.605878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.605895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.613004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.613213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.613229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.622892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.623109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.623125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.629475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.629618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.629633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.634108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.634257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.634273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.637730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.637856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.637871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.640925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.641045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.542 [2024-11-28 13:09:50.641061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:39:20.542 [2024-11-28 13:09:50.644234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.542 [2024-11-28 13:09:50.644322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.543 [2024-11-28 13:09:50.644338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:39:20.543 [2024-11-28 13:09:50.647376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.543 [2024-11-28 13:09:50.647567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.543 [2024-11-28 13:09:50.647586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:39:20.543 [2024-11-28 13:09:50.654701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.543 [2024-11-28 13:09:50.654949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.543 [2024-11-28 13:09:50.654965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:39:20.543 [2024-11-28 13:09:50.660908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.543 [2024-11-28 13:09:50.661043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.543 [2024-11-28 13:09:50.661059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:39:20.806 [2024-11-28 13:09:50.664432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.806 [2024-11-28 13:09:50.664568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.806 [2024-11-28 13:09:50.664585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:39:20.806 [2024-11-28 13:09:50.668110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.806 [2024-11-28 13:09:50.668261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.806 [2024-11-28 13:09:50.668277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:39:20.806 [2024-11-28 13:09:50.671540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.806 [2024-11-28 13:09:50.671685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.806 [2024-11-28 13:09:50.671702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:39:20.806 [2024-11-28 13:09:50.676578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.806 [2024-11-28 13:09:50.676720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.806 [2024-11-28 13:09:50.676736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:39:20.806 [2024-11-28 13:09:50.680178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.806 [2024-11-28 13:09:50.680320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.806 [2024-11-28 13:09:50.680337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:39:20.806 [2024-11-28 13:09:50.683861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.806 [2024-11-28 13:09:50.684005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.806 [2024-11-28 13:09:50.684021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:39:20.806 [2024-11-28 13:09:50.687805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.806 [2024-11-28 13:09:50.687944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.806 [2024-11-28 13:09:50.687960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:39:20.806 [2024-11-28 13:09:50.691369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.806 [2024-11-28 13:09:50.691511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.806 [2024-11-28 13:09:50.691528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:39:20.806 [2024-11-28 13:09:50.694985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.806 [2024-11-28 13:09:50.695156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.806 [2024-11-28 13:09:50.695178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:39:20.806 [2024-11-28 13:09:50.700654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.806 [2024-11-28 13:09:50.700792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.806 [2024-11-28 13:09:50.700808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:39:20.806 [2024-11-28 13:09:50.704020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.806 [2024-11-28 13:09:50.704169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.806 [2024-11-28 13:09:50.704187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:39:20.806 [2024-11-28 13:09:50.707845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.806 [2024-11-28 13:09:50.707990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.806 [2024-11-28 13:09:50.708005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:39:20.806 [2024-11-28 13:09:50.711274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.806 [2024-11-28 13:09:50.711410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.806 [2024-11-28 13:09:50.711427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:39:20.806 [2024-11-28 13:09:50.714958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.806 [2024-11-28 13:09:50.715153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.806 [2024-11-28 13:09:50.715174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:39:20.806 [2024-11-28 13:09:50.718863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.719005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.719022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.723411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.723586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.723603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.730311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.730595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.730611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.736792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.737041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.737057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.741116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.741257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.741274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.744577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.744720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.744737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.749621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.749765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.749780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.753121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.753264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.753280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.757018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.757154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.757179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.761098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.761240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.761259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.764411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.764550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.764568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.767815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.767956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.767973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.771201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.771344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.771364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.776313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.776532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.776548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.779824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.779964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.779981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.786415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.786557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.786573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.789819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.790014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.790031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.793360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.793500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.793517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.796803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.796947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.796963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.802860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.803187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.803204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.808938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.809074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.809091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.812596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.812744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.812760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.815710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.815851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.815867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.818608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.818747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.818763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.821733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.821873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.821889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.824707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.824847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.824864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.828082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.828228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.807 [2024-11-28 13:09:50.828244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:39:20.807 [2024-11-28 13:09:50.830817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.807 [2024-11-28 13:09:50.830946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.808 [2024-11-28 13:09:50.830962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:39:20.808 [2024-11-28 13:09:50.833448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.808 [2024-11-28 13:09:50.833580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.808 [2024-11-28 13:09:50.833596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:39:20.808 [2024-11-28 13:09:50.836078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.808 [2024-11-28 13:09:50.836212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.808 [2024-11-28 13:09:50.836228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:39:20.808 [2024-11-28 13:09:50.838733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.808 [2024-11-28 13:09:50.838863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.808 [2024-11-28 13:09:50.838878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:39:20.808 [2024-11-28 13:09:50.841534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.808 [2024-11-28 13:09:50.841664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.808 [2024-11-28 13:09:50.841681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:39:20.808 [2024-11-28 13:09:50.844872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.808 [2024-11-28 13:09:50.844998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:20.808 [2024-11-28 13:09:50.845015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:39:20.808 [2024-11-28 13:09:50.848230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8
00:39:20.808 [2024-11-28 13:09:50.848360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.808 [2024-11-28 13:09:50.848377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:20.808 [2024-11-28 13:09:50.850812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.808 [2024-11-28 13:09:50.850939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.808 [2024-11-28 13:09:50.850957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:20.808 [2024-11-28 13:09:50.853462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.808 [2024-11-28 13:09:50.853591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.808 [2024-11-28 13:09:50.853611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:20.808 [2024-11-28 13:09:50.856103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.808 [2024-11-28 13:09:50.856236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.808 [2024-11-28 13:09:50.856253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:20.808 [2024-11-28 13:09:50.859049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.808 [2024-11-28 13:09:50.859179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.808 [2024-11-28 13:09:50.859196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:20.808 [2024-11-28 13:09:50.861685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.808 [2024-11-28 13:09:50.861816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.808 [2024-11-28 13:09:50.861832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:20.808 [2024-11-28 13:09:50.864303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.808 [2024-11-28 13:09:50.864432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.808 [2024-11-28 13:09:50.864448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:20.808 [2024-11-28 13:09:50.867259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.808 [2024-11-28 13:09:50.867389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.808 [2024-11-28 13:09:50.867405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:20.808 [2024-11-28 13:09:50.871886] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.808 [2024-11-28 13:09:50.872013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.808 [2024-11-28 13:09:50.872028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:20.808 [2024-11-28 13:09:50.875771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.808 [2024-11-28 13:09:50.875902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.808 [2024-11-28 13:09:50.875917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:20.808 [2024-11-28 13:09:50.878373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.808 [2024-11-28 13:09:50.878503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.808 [2024-11-28 13:09:50.878519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:20.808 [2024-11-28 13:09:50.881029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.808 [2024-11-28 13:09:50.881166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.808 [2024-11-28 13:09:50.881183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:39:20.808 [2024-11-28 13:09:50.883668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.808 [2024-11-28 13:09:50.883795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.808 [2024-11-28 13:09:50.883812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:20.808 [2024-11-28 13:09:50.890277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.808 [2024-11-28 13:09:50.890596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.808 [2024-11-28 13:09:50.890612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:20.808 [2024-11-28 13:09:50.894310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.808 [2024-11-28 13:09:50.894446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.808 [2024-11-28 13:09:50.894463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:20.808 [2024-11-28 13:09:50.898362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.808 [2024-11-28 13:09:50.898489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.808 [2024-11-28 13:09:50.898506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:20.808 [2024-11-28 13:09:50.900966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.808 [2024-11-28 13:09:50.901097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.808 [2024-11-28 13:09:50.901113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:20.808 [2024-11-28 13:09:50.903609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.808 [2024-11-28 13:09:50.903741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.808 [2024-11-28 13:09:50.903757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:20.808 [2024-11-28 13:09:50.906472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.808 [2024-11-28 13:09:50.906661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.808 [2024-11-28 13:09:50.906678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:20.808 [2024-11-28 13:09:50.909820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.808 [2024-11-28 13:09:50.909951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.808 [2024-11-28 13:09:50.909967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:20.808 [2024-11-28 13:09:50.912490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.808 [2024-11-28 13:09:50.912623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.808 [2024-11-28 13:09:50.912639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:20.808 [2024-11-28 13:09:50.915099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.808 [2024-11-28 13:09:50.915234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.809 [2024-11-28 13:09:50.915250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:20.809 [2024-11-28 13:09:50.917919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.809 [2024-11-28 13:09:50.918061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.809 [2024-11-28 13:09:50.918077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:20.809 [2024-11-28 13:09:50.921272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.809 [2024-11-28 13:09:50.921402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:20.809 [2024-11-28 13:09:50.921418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:20.809 [2024-11-28 13:09:50.923963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:20.809 [2024-11-28 13:09:50.924091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.809 [2024-11-28 13:09:50.924107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:20.809 [2024-11-28 13:09:50.929689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.070 [2024-11-28 13:09:50.929814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.070 [2024-11-28 13:09:50.929831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.070 [2024-11-28 13:09:50.933421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.070 [2024-11-28 13:09:50.933554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.070 [2024-11-28 13:09:50.933570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.070 [2024-11-28 13:09:50.936064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.070 [2024-11-28 13:09:50.936200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.070 [2024-11-28 13:09:50.936216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.070 [2024-11-28 13:09:50.938676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.070 [2024-11-28 13:09:50.938807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.070 [2024-11-28 13:09:50.938825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.070 [2024-11-28 13:09:50.941319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.071 [2024-11-28 13:09:50.941452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.071 [2024-11-28 13:09:50.941468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.071 [2024-11-28 13:09:50.943949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.071 [2024-11-28 13:09:50.944078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.071 [2024-11-28 13:09:50.944094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.071 [2024-11-28 13:09:50.946555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.071 [2024-11-28 13:09:50.946683] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.071 [2024-11-28 13:09:50.946699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.071 [2024-11-28 13:09:50.949250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.071 [2024-11-28 13:09:50.949381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.071 [2024-11-28 13:09:50.949396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.071 [2024-11-28 13:09:50.952027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.071 [2024-11-28 13:09:50.952164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.071 [2024-11-28 13:09:50.952180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.071 [2024-11-28 13:09:50.954622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.071 [2024-11-28 13:09:50.954751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.071 [2024-11-28 13:09:50.954768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.071 [2024-11-28 13:09:50.957200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 
00:39:21.071 [2024-11-28 13:09:50.957331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.071 [2024-11-28 13:09:50.957346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.071 [2024-11-28 13:09:50.959789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.071 [2024-11-28 13:09:50.959918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.071 [2024-11-28 13:09:50.959934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.071 [2024-11-28 13:09:50.962504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.071 [2024-11-28 13:09:50.962636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.071 [2024-11-28 13:09:50.962652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.071 [2024-11-28 13:09:50.965272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.071 [2024-11-28 13:09:50.965397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.071 [2024-11-28 13:09:50.965414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.071 [2024-11-28 13:09:50.970081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.071 [2024-11-28 13:09:50.970245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.071 [2024-11-28 13:09:50.970262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.071 [2024-11-28 13:09:50.977729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.071 [2024-11-28 13:09:50.977779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.071 [2024-11-28 13:09:50.977795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.071 [2024-11-28 13:09:50.983521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.071 [2024-11-28 13:09:50.983591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.071 [2024-11-28 13:09:50.983607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.071 [2024-11-28 13:09:50.987200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.071 [2024-11-28 13:09:50.987251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.071 [2024-11-28 13:09:50.987266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.071 [2024-11-28 13:09:50.993358] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.071 [2024-11-28 13:09:50.993411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.071 [2024-11-28 13:09:50.993426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.071 [2024-11-28 13:09:50.996918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.071 [2024-11-28 13:09:50.996962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.071 [2024-11-28 13:09:50.996977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.071 [2024-11-28 13:09:50.999574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.071 [2024-11-28 13:09:50.999619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.071 [2024-11-28 13:09:50.999634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.071 [2024-11-28 13:09:51.002555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.071 [2024-11-28 13:09:51.002600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.071 [2024-11-28 13:09:51.002615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:39:21.071 [2024-11-28 13:09:51.005659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.071 [2024-11-28 13:09:51.005712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.071 [2024-11-28 13:09:51.005728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.071 [2024-11-28 13:09:51.008601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.071 [2024-11-28 13:09:51.008650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.071 [2024-11-28 13:09:51.008666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.071 [2024-11-28 13:09:51.013716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.071 [2024-11-28 13:09:51.013990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.072 [2024-11-28 13:09:51.014005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.072 [2024-11-28 13:09:51.019056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.072 [2024-11-28 13:09:51.019117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.072 [2024-11-28 13:09:51.019132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.072 [2024-11-28 13:09:51.022130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.072 [2024-11-28 13:09:51.022226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.072 [2024-11-28 13:09:51.022242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.072 [2024-11-28 13:09:51.027123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.072 [2024-11-28 13:09:51.027209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.072 [2024-11-28 13:09:51.027224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.072 [2024-11-28 13:09:51.030408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.072 [2024-11-28 13:09:51.030452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.072 [2024-11-28 13:09:51.030467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.072 [2024-11-28 13:09:51.033560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.072 [2024-11-28 13:09:51.033618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.072 [2024-11-28 13:09:51.033637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.072 [2024-11-28 13:09:51.036926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.072 [2024-11-28 13:09:51.036992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.072 [2024-11-28 13:09:51.037007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.072 [2024-11-28 13:09:51.040023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.072 [2024-11-28 13:09:51.040081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.072 [2024-11-28 13:09:51.040097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.072 [2024-11-28 13:09:51.042944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.072 [2024-11-28 13:09:51.043009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.072 [2024-11-28 13:09:51.043024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.072 [2024-11-28 13:09:51.045843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.072 [2024-11-28 13:09:51.045890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:21.072 [2024-11-28 13:09:51.045905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.072 [2024-11-28 13:09:51.048515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.072 [2024-11-28 13:09:51.048586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.072 [2024-11-28 13:09:51.048600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.072 [2024-11-28 13:09:51.051170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.072 [2024-11-28 13:09:51.051214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.072 [2024-11-28 13:09:51.051229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.072 [2024-11-28 13:09:51.053800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.072 [2024-11-28 13:09:51.053843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.072 [2024-11-28 13:09:51.053858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.072 [2024-11-28 13:09:51.056478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.072 [2024-11-28 13:09:51.056541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.072 [2024-11-28 13:09:51.056556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.072 [2024-11-28 13:09:51.059263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.072 [2024-11-28 13:09:51.059326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.072 [2024-11-28 13:09:51.059341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.072 [2024-11-28 13:09:51.062519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.072 [2024-11-28 13:09:51.062636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.072 [2024-11-28 13:09:51.062653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.072 [2024-11-28 13:09:51.071787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.072 [2024-11-28 13:09:51.071856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.072 [2024-11-28 13:09:51.071871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.072 [2024-11-28 13:09:51.081878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.072 [2024-11-28 13:09:51.082182] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.072 [2024-11-28 13:09:51.082198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.072 [2024-11-28 13:09:51.092202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.072 [2024-11-28 13:09:51.092469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.072 [2024-11-28 13:09:51.092484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.072 [2024-11-28 13:09:51.099606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.072 [2024-11-28 13:09:51.099684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.072 [2024-11-28 13:09:51.099700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.072 [2024-11-28 13:09:51.103560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.072 [2024-11-28 13:09:51.103628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.072 [2024-11-28 13:09:51.103643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.072 [2024-11-28 13:09:51.106295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 
00:39:21.072 [2024-11-28 13:09:51.106351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.072 [2024-11-28 13:09:51.106367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.072 [2024-11-28 13:09:51.109007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.072 [2024-11-28 13:09:51.109083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.072 [2024-11-28 13:09:51.109098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.073 [2024-11-28 13:09:51.111739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.073 [2024-11-28 13:09:51.111806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.073 [2024-11-28 13:09:51.111821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.073 [2024-11-28 13:09:51.114440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.073 [2024-11-28 13:09:51.114493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.073 [2024-11-28 13:09:51.114508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.073 [2024-11-28 13:09:51.117215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.073 [2024-11-28 13:09:51.117278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.073 [2024-11-28 13:09:51.117294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.073 [2024-11-28 13:09:51.120303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.073 [2024-11-28 13:09:51.120358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.073 [2024-11-28 13:09:51.120373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.073 [2024-11-28 13:09:51.123122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.073 [2024-11-28 13:09:51.123182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.073 [2024-11-28 13:09:51.123197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.073 [2024-11-28 13:09:51.125715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.073 [2024-11-28 13:09:51.125771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.073 [2024-11-28 13:09:51.125786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.073 [2024-11-28 13:09:51.128295] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.073 [2024-11-28 13:09:51.128348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.073 [2024-11-28 13:09:51.128363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.073 [2024-11-28 13:09:51.130900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.073 [2024-11-28 13:09:51.130961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.073 [2024-11-28 13:09:51.130976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.073 [2024-11-28 13:09:51.133733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.073 [2024-11-28 13:09:51.133789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.073 [2024-11-28 13:09:51.133807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.073 [2024-11-28 13:09:51.138224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.073 [2024-11-28 13:09:51.138502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.073 [2024-11-28 13:09:51.138518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:39:21.073 [2024-11-28 13:09:51.144738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.073 [2024-11-28 13:09:51.144804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.073 [2024-11-28 13:09:51.144819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.073 [2024-11-28 13:09:51.147599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.073 [2024-11-28 13:09:51.147653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.073 [2024-11-28 13:09:51.147669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.073 [2024-11-28 13:09:51.153616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.073 [2024-11-28 13:09:51.153679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.073 [2024-11-28 13:09:51.153695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.073 [2024-11-28 13:09:51.158273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.073 [2024-11-28 13:09:51.158351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.073 [2024-11-28 13:09:51.158367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.073 [2024-11-28 13:09:51.160920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.073 [2024-11-28 13:09:51.160965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.073 [2024-11-28 13:09:51.160980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.073 [2024-11-28 13:09:51.164190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.073 [2024-11-28 13:09:51.164303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.073 [2024-11-28 13:09:51.164318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.073 [2024-11-28 13:09:51.167289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.073 [2024-11-28 13:09:51.167340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.073 [2024-11-28 13:09:51.167355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.073 [2024-11-28 13:09:51.170248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.073 [2024-11-28 13:09:51.170298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.073 [2024-11-28 13:09:51.170313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.073 [2024-11-28 13:09:51.173168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.073 [2024-11-28 13:09:51.173234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.073 [2024-11-28 13:09:51.173249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.073 [2024-11-28 13:09:51.176056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.073 [2024-11-28 13:09:51.176110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.073 [2024-11-28 13:09:51.176125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.073 [2024-11-28 13:09:51.179014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.073 [2024-11-28 13:09:51.179056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.073 [2024-11-28 13:09:51.179070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.073 [2024-11-28 13:09:51.181703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.073 [2024-11-28 13:09:51.181753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:21.073 [2024-11-28 13:09:51.181768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.073 [2024-11-28 13:09:51.184646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.074 [2024-11-28 13:09:51.184691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.074 [2024-11-28 13:09:51.184706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.074 [2024-11-28 13:09:51.187559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.074 [2024-11-28 13:09:51.187616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.074 [2024-11-28 13:09:51.187631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.074 [2024-11-28 13:09:51.190511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.074 [2024-11-28 13:09:51.190557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.074 [2024-11-28 13:09:51.190572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.074 [2024-11-28 13:09:51.193137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.074 [2024-11-28 13:09:51.193192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.074 [2024-11-28 13:09:51.193207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.335 7166.00 IOPS, 895.75 MiB/s [2024-11-28T12:09:51.462Z] [2024-11-28 13:09:51.196740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.335 [2024-11-28 13:09:51.196793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.335 [2024-11-28 13:09:51.196809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.335 [2024-11-28 13:09:51.199317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.335 [2024-11-28 13:09:51.199367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.335 [2024-11-28 13:09:51.199382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.335 [2024-11-28 13:09:51.201901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.335 [2024-11-28 13:09:51.201949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.335 [2024-11-28 13:09:51.201964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.335 [2024-11-28 13:09:51.204483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.335 
[2024-11-28 13:09:51.204531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.335 [2024-11-28 13:09:51.204546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.335 [2024-11-28 13:09:51.207062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.335 [2024-11-28 13:09:51.207109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.336 [2024-11-28 13:09:51.207125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.336 [2024-11-28 13:09:51.209622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.336 [2024-11-28 13:09:51.209669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.336 [2024-11-28 13:09:51.209684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.336 [2024-11-28 13:09:51.212197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.336 [2024-11-28 13:09:51.212247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.336 [2024-11-28 13:09:51.212262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.336 [2024-11-28 13:09:51.214751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.336 [2024-11-28 13:09:51.214798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.336 [2024-11-28 13:09:51.214813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.336 [2024-11-28 13:09:51.217300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.336 [2024-11-28 13:09:51.217341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.336 [2024-11-28 13:09:51.217360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.336 [2024-11-28 13:09:51.219868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.336 [2024-11-28 13:09:51.219919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.336 [2024-11-28 13:09:51.219935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.336 [2024-11-28 13:09:51.222413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.336 [2024-11-28 13:09:51.222475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.336 [2024-11-28 13:09:51.222491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.336 [2024-11-28 13:09:51.225318] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.336 [2024-11-28 13:09:51.225411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.336 [2024-11-28 13:09:51.225427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.336 [2024-11-28 13:09:51.230591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.336 [2024-11-28 13:09:51.230843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.336 [2024-11-28 13:09:51.230859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.336 [2024-11-28 13:09:51.240513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.336 [2024-11-28 13:09:51.240747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.336 [2024-11-28 13:09:51.240762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.336 [2024-11-28 13:09:51.250723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.336 [2024-11-28 13:09:51.250963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.336 [2024-11-28 13:09:51.250978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:39:21.336 [2024-11-28 13:09:51.261496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.336 [2024-11-28 13:09:51.261701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.336 [2024-11-28 13:09:51.261716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.336 [2024-11-28 13:09:51.272390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.336 [2024-11-28 13:09:51.272630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.336 [2024-11-28 13:09:51.272645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.336 [2024-11-28 13:09:51.282726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.336 [2024-11-28 13:09:51.282999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.336 [2024-11-28 13:09:51.283014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.336 [2024-11-28 13:09:51.292563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.336 [2024-11-28 13:09:51.292829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.336 [2024-11-28 13:09:51.292844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.336 [2024-11-28 13:09:51.302054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.336 [2024-11-28 13:09:51.302284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.336 [2024-11-28 13:09:51.302299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.336 [2024-11-28 13:09:51.309053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.336 [2024-11-28 13:09:51.309140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.336 [2024-11-28 13:09:51.309156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.336 [2024-11-28 13:09:51.317838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.336 [2024-11-28 13:09:51.318074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.336 [2024-11-28 13:09:51.318089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.336 [2024-11-28 13:09:51.325071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.336 [2024-11-28 13:09:51.325179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.336 [2024-11-28 13:09:51.325194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.336 [2024-11-28 13:09:51.328688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.336 [2024-11-28 13:09:51.328764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.336 [2024-11-28 13:09:51.328779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.336 [2024-11-28 13:09:51.331454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.336 [2024-11-28 13:09:51.331523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.336 [2024-11-28 13:09:51.331538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.336 [2024-11-28 13:09:51.334245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.336 [2024-11-28 13:09:51.334300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.336 [2024-11-28 13:09:51.334319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.336 [2024-11-28 13:09:51.337366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.337 [2024-11-28 13:09:51.337426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:21.337 [2024-11-28 13:09:51.337441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.337 [2024-11-28 13:09:51.340299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.337 [2024-11-28 13:09:51.340349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.337 [2024-11-28 13:09:51.340364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.337 [2024-11-28 13:09:51.342933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.337 [2024-11-28 13:09:51.342978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.337 [2024-11-28 13:09:51.342992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.337 [2024-11-28 13:09:51.345560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.337 [2024-11-28 13:09:51.345610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.337 [2024-11-28 13:09:51.345625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.337 [2024-11-28 13:09:51.348182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.337 [2024-11-28 13:09:51.348232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.337 [2024-11-28 13:09:51.348247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.337 [2024-11-28 13:09:51.350964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.337 [2024-11-28 13:09:51.351045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.337 [2024-11-28 13:09:51.351060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.337 [2024-11-28 13:09:51.354440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.337 [2024-11-28 13:09:51.354505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.337 [2024-11-28 13:09:51.354520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.337 [2024-11-28 13:09:51.357880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.337 [2024-11-28 13:09:51.357945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.337 [2024-11-28 13:09:51.357961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.337 [2024-11-28 13:09:51.367839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.337 [2024-11-28 13:09:51.367945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.337 [2024-11-28 13:09:51.367960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.337 [2024-11-28 13:09:51.378299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.337 [2024-11-28 13:09:51.378470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.337 [2024-11-28 13:09:51.378485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.337 [2024-11-28 13:09:51.387859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.337 [2024-11-28 13:09:51.388026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.337 [2024-11-28 13:09:51.388041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.337 [2024-11-28 13:09:51.398484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.337 [2024-11-28 13:09:51.398732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.337 [2024-11-28 13:09:51.398748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.337 [2024-11-28 13:09:51.408289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 
00:39:21.337 [2024-11-28 13:09:51.408557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.337 [2024-11-28 13:09:51.408572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.337 [2024-11-28 13:09:51.418028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.337 [2024-11-28 13:09:51.418345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.337 [2024-11-28 13:09:51.418361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.337 [2024-11-28 13:09:51.427442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.337 [2024-11-28 13:09:51.427704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.337 [2024-11-28 13:09:51.427719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.337 [2024-11-28 13:09:51.432169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.337 [2024-11-28 13:09:51.432224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.337 [2024-11-28 13:09:51.432239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.337 [2024-11-28 13:09:51.435746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.337 [2024-11-28 13:09:51.435825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.337 [2024-11-28 13:09:51.435840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.337 [2024-11-28 13:09:51.438874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.337 [2024-11-28 13:09:51.438917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.337 [2024-11-28 13:09:51.438932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.337 [2024-11-28 13:09:51.442757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.337 [2024-11-28 13:09:51.442801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.337 [2024-11-28 13:09:51.442816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.337 [2024-11-28 13:09:51.445939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.337 [2024-11-28 13:09:51.445983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.337 [2024-11-28 13:09:51.445998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.337 [2024-11-28 13:09:51.448835] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.337 [2024-11-28 13:09:51.448897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.337 [2024-11-28 13:09:51.448912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.337 [2024-11-28 13:09:51.451743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.338 [2024-11-28 13:09:51.451793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.338 [2024-11-28 13:09:51.451808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.338 [2024-11-28 13:09:51.454527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.338 [2024-11-28 13:09:51.454572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.338 [2024-11-28 13:09:51.454587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.338 [2024-11-28 13:09:51.457352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.338 [2024-11-28 13:09:51.457403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.338 [2024-11-28 13:09:51.457418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:39:21.598 [2024-11-28 13:09:51.459975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.598 [2024-11-28 13:09:51.460019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.598 [2024-11-28 13:09:51.460034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.598 [2024-11-28 13:09:51.462576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.598 [2024-11-28 13:09:51.462625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.598 [2024-11-28 13:09:51.462643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.598 [2024-11-28 13:09:51.465253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.465298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.465313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.468230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.468299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.468314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.471038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.471094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.471109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.473616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.473661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.473677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.476185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.476237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.476252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.478881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.478948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.478963] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.482210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.482323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.482339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.492042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.492117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.492132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.500554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.500817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.500845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.508413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.508631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:21.599 [2024-11-28 13:09:51.508647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.512702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.512759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.512774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.522593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.522817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.522832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.532996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.533235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.533251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.543783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.544110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.544126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.552823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.553021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.553036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.563095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.563242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.563257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.573902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.574133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.574149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.584768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.585019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.585034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.595130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.595444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.595460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.605263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.605335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.605351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.614802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.615057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.615073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.624552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 
00:39:21.599 [2024-11-28 13:09:51.624781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.624796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.634710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.634959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.634974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.645047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.645287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.645303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.656215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.656265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.656280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.665612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.665906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.665925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.599 [2024-11-28 13:09:51.676436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.599 [2024-11-28 13:09:51.676688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.599 [2024-11-28 13:09:51.676711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.600 [2024-11-28 13:09:51.686493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.600 [2024-11-28 13:09:51.686779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.600 [2024-11-28 13:09:51.686795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.600 [2024-11-28 13:09:51.696210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.600 [2024-11-28 13:09:51.696418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.600 [2024-11-28 13:09:51.696433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.600 [2024-11-28 13:09:51.706989] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.600 [2024-11-28 13:09:51.707303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.600 [2024-11-28 13:09:51.707320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.600 [2024-11-28 13:09:51.717012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.600 [2024-11-28 13:09:51.717151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.600 [2024-11-28 13:09:51.717171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.861 [2024-11-28 13:09:51.726678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.861 [2024-11-28 13:09:51.726941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.861 [2024-11-28 13:09:51.726957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.861 [2024-11-28 13:09:51.735789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.736046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.736061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:39:21.862 [2024-11-28 13:09:51.745658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.745991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.746006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.753404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.753538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.753556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.763585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.763890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.763906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.769877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.769937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.769952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.775702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.775753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.775768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.779386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.779431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.779446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.783326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.783371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.783386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.787174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.787220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.787236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.790888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.790987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.791002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.799277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.799530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.799546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.807351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.807475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.807490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.812321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.812393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:21.862 [2024-11-28 13:09:51.812408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.818201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.818463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.818479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.821709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.821752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.821767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.824452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.824498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.824513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.827246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.827289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.827304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.829880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.829927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.829943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.832576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.832626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.832641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.835232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.835307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.835325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.837892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.837947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.837962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.840559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.840616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.840631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.843193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.843238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.843253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.845780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.845826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.845841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.848343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 
00:39:21.862 [2024-11-28 13:09:51.848398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.848413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.850933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.850984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.850999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.853583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.862 [2024-11-28 13:09:51.853628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.862 [2024-11-28 13:09:51.853643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.862 [2024-11-28 13:09:51.859463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.859734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.859749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.867004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.867064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.867079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.872961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.873064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.873079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.881800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.881899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.881914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.890039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.890315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.890331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.895295] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.895392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.895408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.898821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.898921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.898936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.903702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.904057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.904073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.908670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.908769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.908784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:39:21.863 [2024-11-28 13:09:51.912687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.912785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.912800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.918892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.919174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.919191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.927079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.927358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.927375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.931109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.931265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.931281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.935174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.935281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.935297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.942197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.942299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.942315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.945464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.945566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.945581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.948972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.949106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.949122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.952899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.952997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.953012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.956407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.956503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.956521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.959833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.959907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.959922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.964230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.964328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:21.863 [2024-11-28 13:09:51.964343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.968123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.968223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.968239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.973360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.973466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.973482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.978032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.978124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.978139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.981639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.981737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.981752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:21.863 [2024-11-28 13:09:51.984995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:21.863 [2024-11-28 13:09:51.985088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.863 [2024-11-28 13:09:51.985103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:22.126 [2024-11-28 13:09:51.988632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.126 [2024-11-28 13:09:51.988729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.126 [2024-11-28 13:09:51.988744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:22.126 [2024-11-28 13:09:51.992469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.126 [2024-11-28 13:09:51.992559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.126 [2024-11-28 13:09:51.992574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:22.126 [2024-11-28 13:09:51.997056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.126 [2024-11-28 13:09:51.997147] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.126 [2024-11-28 13:09:51.997166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:22.126 [2024-11-28 13:09:52.000492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.126 [2024-11-28 13:09:52.000587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.126 [2024-11-28 13:09:52.000602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:22.126 [2024-11-28 13:09:52.003816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.126 [2024-11-28 13:09:52.003904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.126 [2024-11-28 13:09:52.003919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:22.126 [2024-11-28 13:09:52.007453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.126 [2024-11-28 13:09:52.007540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.126 [2024-11-28 13:09:52.007555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:22.126 [2024-11-28 13:09:52.012113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.126 [2024-11-28 13:09:52.012396] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.126 [2024-11-28 13:09:52.012412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:22.126 [2024-11-28 13:09:52.017689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.126 [2024-11-28 13:09:52.017785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.126 [2024-11-28 13:09:52.017800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:22.126 [2024-11-28 13:09:52.025652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.126 [2024-11-28 13:09:52.025767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.126 [2024-11-28 13:09:52.025782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:22.126 [2024-11-28 13:09:52.028333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.126 [2024-11-28 13:09:52.028434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.028449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.031223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with 
pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.031322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.031337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.033955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.034054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.034069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.036714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.036826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.036841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.039432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.039531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.039547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.042036] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.042143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.042163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.044667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.044774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.044789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.047286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.047400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.047416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.049904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.050011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.050027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 
13:09:52.052489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.052603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.052621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.055066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.055183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.055198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.057632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.057742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.057758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.060204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.060313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.060327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.062814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.062923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.062938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.065877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.066006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.066022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.073020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.073266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.073281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.081957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.082238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.082254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.091707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.091951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.091967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.102278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.102439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.102454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.106180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.106282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.106298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.109119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.109227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.109243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.112265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.112363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.112378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.115843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.116012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.116028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.121541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.121778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.121794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.131466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.131777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:22.127 [2024-11-28 13:09:52.131794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.142156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.142430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.142446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.152316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.152571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.152587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:22.127 [2024-11-28 13:09:52.162893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.127 [2024-11-28 13:09:52.163182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.127 [2024-11-28 13:09:52.163199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:22.128 [2024-11-28 13:09:52.173382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.128 [2024-11-28 13:09:52.173647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.128 [2024-11-28 13:09:52.173663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:22.128 [2024-11-28 13:09:52.183912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.128 [2024-11-28 13:09:52.184223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.128 [2024-11-28 13:09:52.184240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:22.128 [2024-11-28 13:09:52.194551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23d3d90) with pdu=0x200016eff3c8 00:39:22.128 [2024-11-28 13:09:52.195102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:22.128 [2024-11-28 13:09:52.195119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:22.128 6322.50 IOPS, 790.31 MiB/s 00:39:22.128 Latency(us) 00:39:22.128 [2024-11-28T12:09:52.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:22.128 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:39:22.128 nvme0n1 : 2.01 6312.96 789.12 0.00 0.00 2528.87 1040.08 11495.62 00:39:22.128 [2024-11-28T12:09:52.255Z] =================================================================================================================== 00:39:22.128 [2024-11-28T12:09:52.255Z] Total : 6312.96 789.12 0.00 0.00 2528.87 1040.08 11495.62 00:39:22.128 { 00:39:22.128 "results": [ 00:39:22.128 { 00:39:22.128 "job": "nvme0n1", 00:39:22.128 "core_mask": "0x2", 00:39:22.128 "workload": 
"randwrite", 00:39:22.128 "status": "finished", 00:39:22.128 "queue_depth": 16, 00:39:22.128 "io_size": 131072, 00:39:22.128 "runtime": 2.005399, 00:39:22.128 "iops": 6312.958169421646, 00:39:22.128 "mibps": 789.1197711777057, 00:39:22.128 "io_failed": 0, 00:39:22.128 "io_timeout": 0, 00:39:22.128 "avg_latency_us": 2528.870885145909, 00:39:22.128 "min_latency_us": 1040.0801871032409, 00:39:22.128 "max_latency_us": 11495.623120614768 00:39:22.128 } 00:39:22.128 ], 00:39:22.128 "core_count": 1 00:39:22.128 } 00:39:22.128 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:39:22.128 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:39:22.128 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:39:22.128 | .driver_specific 00:39:22.128 | .nvme_error 00:39:22.128 | .status_code 00:39:22.128 | .command_transient_transport_error' 00:39:22.128 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:39:22.390 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 409 > 0 )) 00:39:22.390 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3667942 00:39:22.390 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3667942 ']' 00:39:22.390 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3667942 00:39:22.390 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:39:22.390 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:22.390 13:09:52 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3667942 00:39:22.390 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:22.390 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:22.390 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3667942' 00:39:22.390 killing process with pid 3667942 00:39:22.390 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3667942 00:39:22.390 Received shutdown signal, test time was about 2.000000 seconds 00:39:22.390 00:39:22.390 Latency(us) 00:39:22.390 [2024-11-28T12:09:52.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:22.390 [2024-11-28T12:09:52.517Z] =================================================================================================================== 00:39:22.390 [2024-11-28T12:09:52.517Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:22.390 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3667942 00:39:22.651 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3665573 00:39:22.651 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3665573 ']' 00:39:22.651 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3665573 00:39:22.651 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:39:22.651 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:22.651 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3665573 00:39:22.651 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:22.651 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:22.651 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3665573' 00:39:22.651 killing process with pid 3665573 00:39:22.651 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3665573 00:39:22.651 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3665573 00:39:22.651 00:39:22.651 real 0m16.223s 00:39:22.651 user 0m31.532s 00:39:22.651 sys 0m3.667s 00:39:22.651 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:22.651 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:22.651 ************************************ 00:39:22.651 END TEST nvmf_digest_error 00:39:22.651 ************************************ 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v 
-r nvme-tcp 00:39:22.912 rmmod nvme_tcp 00:39:22.912 rmmod nvme_fabrics 00:39:22.912 rmmod nvme_keyring 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3665573 ']' 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3665573 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3665573 ']' 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3665573 00:39:22.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3665573) - No such process 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3665573 is not found' 00:39:22.912 Process with pid 3665573 is not found 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:22.912 13:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:24.824 13:09:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:24.824 00:39:24.824 real 0m43.041s 00:39:24.824 user 1m6.384s 00:39:24.824 sys 0m13.268s 00:39:24.824 13:09:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:24.824 13:09:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:39:24.824 ************************************ 00:39:24.824 END TEST nvmf_digest 00:39:24.824 ************************************ 00:39:25.085 13:09:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:39:25.085 13:09:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:39:25.085 13:09:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:39:25.085 13:09:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:39:25.085 13:09:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:25.085 13:09:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:25.085 13:09:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.085 ************************************ 00:39:25.085 START TEST nvmf_bdevperf 00:39:25.085 ************************************ 00:39:25.085 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:39:25.085 * Looking for 
test storage... 00:39:25.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:25.085 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:25.085 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:39:25.085 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:25.085 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:25.085 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:25.085 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:25.085 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:25.085 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:39:25.085 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:39:25.085 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:39:25.085 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:39:25.085 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:39:25.085 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:39:25.085 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:39:25.085 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:25.085 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:39:25.085 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:39:25.085 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:25.085 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:25.085 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:39:25.085 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:39:25.085 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:25.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:25.347 --rc genhtml_branch_coverage=1 00:39:25.347 --rc genhtml_function_coverage=1 00:39:25.347 --rc genhtml_legend=1 00:39:25.347 --rc geninfo_all_blocks=1 00:39:25.347 --rc geninfo_unexecuted_blocks=1 00:39:25.347 00:39:25.347 ' 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:25.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:25.347 --rc genhtml_branch_coverage=1 00:39:25.347 --rc genhtml_function_coverage=1 00:39:25.347 --rc genhtml_legend=1 00:39:25.347 --rc geninfo_all_blocks=1 00:39:25.347 --rc geninfo_unexecuted_blocks=1 00:39:25.347 00:39:25.347 ' 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:25.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:25.347 --rc genhtml_branch_coverage=1 00:39:25.347 --rc genhtml_function_coverage=1 00:39:25.347 --rc genhtml_legend=1 00:39:25.347 --rc geninfo_all_blocks=1 00:39:25.347 --rc geninfo_unexecuted_blocks=1 00:39:25.347 00:39:25.347 ' 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:25.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:25.347 --rc genhtml_branch_coverage=1 00:39:25.347 --rc genhtml_function_coverage=1 00:39:25.347 --rc genhtml_legend=1 00:39:25.347 --rc geninfo_all_blocks=1 00:39:25.347 --rc geninfo_unexecuted_blocks=1 00:39:25.347 00:39:25.347 ' 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:25.347 13:09:55 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:25.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:25.347 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:25.348 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:25.348 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:39:25.348 13:09:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:33.496 13:10:02 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:33.496 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:33.496 
13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:33.496 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:33.496 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:33.496 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:33.497 13:10:02 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:33.497 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:33.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:33.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:39:33.497 00:39:33.497 --- 10.0.0.2 ping statistics --- 00:39:33.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:33.497 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:33.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:33.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:39:33.497 00:39:33.497 --- 10.0.0.1 ping statistics --- 00:39:33.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:33.497 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3672683 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3672683 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3672683 ']' 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:33.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:33.497 13:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:33.497 [2024-11-28 13:10:02.794867] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:39:33.497 [2024-11-28 13:10:02.794938] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:33.497 [2024-11-28 13:10:02.939643] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:39:33.497 [2024-11-28 13:10:02.999517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:33.497 [2024-11-28 13:10:03.027385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:33.497 [2024-11-28 13:10:03.027429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:33.497 [2024-11-28 13:10:03.027438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:33.497 [2024-11-28 13:10:03.027445] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:33.497 [2024-11-28 13:10:03.027452] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:33.497 [2024-11-28 13:10:03.029200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:33.497 [2024-11-28 13:10:03.029407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:33.498 [2024-11-28 13:10:03.029407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:33.498 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:33.498 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:39:33.498 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:33.498 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:33.498 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:33.760 [2024-11-28 13:10:03.670601] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:33.760 Malloc0 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:33.760 [2024-11-28 13:10:03.745380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:39:33.760 
13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:33.760 { 00:39:33.760 "params": { 00:39:33.760 "name": "Nvme$subsystem", 00:39:33.760 "trtype": "$TEST_TRANSPORT", 00:39:33.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:33.760 "adrfam": "ipv4", 00:39:33.760 "trsvcid": "$NVMF_PORT", 00:39:33.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:33.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:33.760 "hdgst": ${hdgst:-false}, 00:39:33.760 "ddgst": ${ddgst:-false} 00:39:33.760 }, 00:39:33.760 "method": "bdev_nvme_attach_controller" 00:39:33.760 } 00:39:33.760 EOF 00:39:33.760 )") 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:39:33.760 13:10:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:33.760 "params": { 00:39:33.760 "name": "Nvme1", 00:39:33.760 "trtype": "tcp", 00:39:33.760 "traddr": "10.0.0.2", 00:39:33.760 "adrfam": "ipv4", 00:39:33.760 "trsvcid": "4420", 00:39:33.760 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:33.760 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:33.760 "hdgst": false, 00:39:33.760 "ddgst": false 00:39:33.760 }, 00:39:33.760 "method": "bdev_nvme_attach_controller" 00:39:33.760 }' 00:39:33.760 [2024-11-28 13:10:03.805266] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:39:33.760 [2024-11-28 13:10:03.805329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3673029 ] 00:39:34.023 [2024-11-28 13:10:03.941813] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:39:34.023 [2024-11-28 13:10:04.001295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:34.023 [2024-11-28 13:10:04.029437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:34.284 Running I/O for 1 seconds... 00:39:35.228 8450.00 IOPS, 33.01 MiB/s 00:39:35.228 Latency(us) 00:39:35.228 [2024-11-28T12:10:05.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:35.228 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:35.228 Verification LBA range: start 0x0 length 0x4000 00:39:35.228 Nvme1n1 : 1.01 8478.61 33.12 0.00 0.00 15029.96 3010.76 12645.19 00:39:35.228 [2024-11-28T12:10:05.355Z] =================================================================================================================== 00:39:35.228 [2024-11-28T12:10:05.355Z] Total : 8478.61 33.12 0.00 0.00 15029.96 3010.76 12645.19 00:39:35.488 13:10:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3673344 00:39:35.488 13:10:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:39:35.488 13:10:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:39:35.488 13:10:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:39:35.488 13:10:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 
00:39:35.488 13:10:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:39:35.488 13:10:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:35.488 13:10:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:35.488 { 00:39:35.488 "params": { 00:39:35.488 "name": "Nvme$subsystem", 00:39:35.488 "trtype": "$TEST_TRANSPORT", 00:39:35.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:35.488 "adrfam": "ipv4", 00:39:35.488 "trsvcid": "$NVMF_PORT", 00:39:35.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:35.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:35.488 "hdgst": ${hdgst:-false}, 00:39:35.488 "ddgst": ${ddgst:-false} 00:39:35.488 }, 00:39:35.488 "method": "bdev_nvme_attach_controller" 00:39:35.488 } 00:39:35.488 EOF 00:39:35.488 )") 00:39:35.488 13:10:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:39:35.488 13:10:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:39:35.488 13:10:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:39:35.488 13:10:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:35.488 "params": { 00:39:35.488 "name": "Nvme1", 00:39:35.488 "trtype": "tcp", 00:39:35.488 "traddr": "10.0.0.2", 00:39:35.488 "adrfam": "ipv4", 00:39:35.488 "trsvcid": "4420", 00:39:35.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:35.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:35.488 "hdgst": false, 00:39:35.488 "ddgst": false 00:39:35.488 }, 00:39:35.488 "method": "bdev_nvme_attach_controller" 00:39:35.488 }' 00:39:35.488 [2024-11-28 13:10:05.482469] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:39:35.488 [2024-11-28 13:10:05.482531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3673344 ] 00:39:35.749 [2024-11-28 13:10:05.614910] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:39:35.749 [2024-11-28 13:10:05.671002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:35.749 [2024-11-28 13:10:05.688344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:35.749 Running I/O for 15 seconds... 00:39:38.072 10766.00 IOPS, 42.05 MiB/s [2024-11-28T12:10:08.467Z] 10836.50 IOPS, 42.33 MiB/s [2024-11-28T12:10:08.467Z] 13:10:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3672683 00:39:38.340 13:10:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:39:38.340 [2024-11-28 13:10:08.445079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.340 [2024-11-28 13:10:08.445121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.340 [2024-11-28 13:10:08.445142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.340 [2024-11-28 13:10:08.445153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.340 [2024-11-28 13:10:08.445171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.340 [2024-11-28 13:10:08.445181] nvme_qpair.c: 
lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.340 [2024-11-28 13:10:08.445300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.340 [... identical READ / ABORTED - SQ DELETION (00/08) completion pairs repeated for lba:95408 through lba:95576 (len:8, qid:1, sqhd:0000 p:0 m:0 dnr:0) omitted ...] 00:39:38.341
[2024-11-28 13:10:08.445719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.445727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 [2024-11-28 13:10:08.445736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.445743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 [2024-11-28 13:10:08.445755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.445762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 [2024-11-28 13:10:08.445772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.445779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 [2024-11-28 13:10:08.445788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.445796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 [2024-11-28 13:10:08.445805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.445812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 [2024-11-28 13:10:08.445822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.445830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 [2024-11-28 13:10:08.445839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.445846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 [2024-11-28 13:10:08.445856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.445863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 [2024-11-28 13:10:08.445872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.445880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 [2024-11-28 13:10:08.445889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.445896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 [2024-11-28 13:10:08.445906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 
lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.445913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 [2024-11-28 13:10:08.445922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.445930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 [2024-11-28 13:10:08.445939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.445946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 [2024-11-28 13:10:08.445956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.445965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 [2024-11-28 13:10:08.445976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.445983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 [2024-11-28 13:10:08.445993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.446000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 
[2024-11-28 13:10:08.446010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.446017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 [2024-11-28 13:10:08.446027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.446034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 [2024-11-28 13:10:08.446043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.446051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 [2024-11-28 13:10:08.446060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.446068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 [2024-11-28 13:10:08.446077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.446085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 [2024-11-28 13:10:08.446094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.446102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 [2024-11-28 13:10:08.446111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-11-28 13:10:08.446118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.341 [2024-11-28 13:10:08.446128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 
[2024-11-28 13:10:08.446301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446396] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 
lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 
[2024-11-28 13:10:08.446591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.342 [2024-11-28 13:10:08.446727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.342 [2024-11-28 13:10:08.446734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.446744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.446751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.446761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.446768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.446777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 
lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.446785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.446794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.446802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.446811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:38.343 [2024-11-28 13:10:08.446819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.446830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:38.343 [2024-11-28 13:10:08.446838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.446848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:38.343 [2024-11-28 13:10:08.446855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.446865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:38.343 [2024-11-28 13:10:08.446872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 
[2024-11-28 13:10:08.446881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.446889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.446898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.446906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.446915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.446922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.446932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.446939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.446949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.446956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.446965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.446973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.446983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.446990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.446999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.447007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.447016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:38.343 [2024-11-28 13:10:08.447023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.447033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.447042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.447051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.447059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.447069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 
lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.447076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.447085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.447093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.447102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.447110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.447119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.447126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.447136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.447144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.447153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.447165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 
[2024-11-28 13:10:08.447175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.447182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.447191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.447198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.447208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.447215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.447225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.447232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.447242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.447249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.447260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.447267] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.447277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.447284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.447294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.447301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.447310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.447318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.447327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.447335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.447345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.343 [2024-11-28 13:10:08.447352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.343 [2024-11-28 13:10:08.447361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x23a66d0 is same with the state(6) to be set 00:39:38.343 [2024-11-28 13:10:08.447370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:38.343 [2024-11-28 13:10:08.447376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:38.343 [2024-11-28 13:10:08.447383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96312 len:8 PRP1 0x0 PRP2 0x0 00:39:38.343 [2024-11-28 13:10:08.447391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.344 [2024-11-28 13:10:08.451058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.344 [2024-11-28 13:10:08.451109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.344 [2024-11-28 13:10:08.451915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.344 [2024-11-28 13:10:08.451934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.344 [2024-11-28 13:10:08.451943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.344 [2024-11-28 13:10:08.452166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.344 [2024-11-28 13:10:08.452387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.344 [2024-11-28 13:10:08.452396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.344 [2024-11-28 13:10:08.452406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in 
failed state. 00:39:38.344 [2024-11-28 13:10:08.452415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:38.640 [2024-11-28 13:10:08.465121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.640 [2024-11-28 13:10:08.465627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.640 [2024-11-28 13:10:08.465667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.640 [2024-11-28 13:10:08.465678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.640 [2024-11-28 13:10:08.465917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.640 [2024-11-28 13:10:08.466139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.640 [2024-11-28 13:10:08.466148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.640 [2024-11-28 13:10:08.466157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.640 [2024-11-28 13:10:08.466174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.640 [2024-11-28 13:10:08.478894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.640 [2024-11-28 13:10:08.479549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.640 [2024-11-28 13:10:08.479587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.641 [2024-11-28 13:10:08.479599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.641 [2024-11-28 13:10:08.479837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.641 [2024-11-28 13:10:08.480059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.641 [2024-11-28 13:10:08.480068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.641 [2024-11-28 13:10:08.480076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.641 [2024-11-28 13:10:08.480084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.641 [2024-11-28 13:10:08.492780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.641 [2024-11-28 13:10:08.493476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.641 [2024-11-28 13:10:08.493515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.641 [2024-11-28 13:10:08.493526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.641 [2024-11-28 13:10:08.493764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.641 [2024-11-28 13:10:08.493985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.641 [2024-11-28 13:10:08.493995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.641 [2024-11-28 13:10:08.494003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.641 [2024-11-28 13:10:08.494011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.641 [2024-11-28 13:10:08.506705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.641 [2024-11-28 13:10:08.507288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.641 [2024-11-28 13:10:08.507326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.641 [2024-11-28 13:10:08.507348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.641 [2024-11-28 13:10:08.507586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.641 [2024-11-28 13:10:08.507809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.641 [2024-11-28 13:10:08.507818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.641 [2024-11-28 13:10:08.507825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.641 [2024-11-28 13:10:08.507834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.641 [2024-11-28 13:10:08.520533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.641 [2024-11-28 13:10:08.521000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.641 [2024-11-28 13:10:08.521020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.641 [2024-11-28 13:10:08.521028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.641 [2024-11-28 13:10:08.521253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.641 [2024-11-28 13:10:08.521472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.641 [2024-11-28 13:10:08.521480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.641 [2024-11-28 13:10:08.521487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.641 [2024-11-28 13:10:08.521494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.641 [2024-11-28 13:10:08.534384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.641 [2024-11-28 13:10:08.535048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.641 [2024-11-28 13:10:08.535087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.641 [2024-11-28 13:10:08.535099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.641 [2024-11-28 13:10:08.535349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.641 [2024-11-28 13:10:08.535573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.641 [2024-11-28 13:10:08.535582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.641 [2024-11-28 13:10:08.535590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.641 [2024-11-28 13:10:08.535598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.641 [2024-11-28 13:10:08.548301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.641 [2024-11-28 13:10:08.548952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.641 [2024-11-28 13:10:08.548991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.641 [2024-11-28 13:10:08.549003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.641 [2024-11-28 13:10:08.549252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.641 [2024-11-28 13:10:08.549480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.641 [2024-11-28 13:10:08.549489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.641 [2024-11-28 13:10:08.549497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.641 [2024-11-28 13:10:08.549505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.641 [2024-11-28 13:10:08.562096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.641 [2024-11-28 13:10:08.562648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.641 [2024-11-28 13:10:08.562669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.641 [2024-11-28 13:10:08.562677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.641 [2024-11-28 13:10:08.562895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.641 [2024-11-28 13:10:08.563113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.641 [2024-11-28 13:10:08.563121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.641 [2024-11-28 13:10:08.563128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.641 [2024-11-28 13:10:08.563135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.641 [2024-11-28 13:10:08.576031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.641 [2024-11-28 13:10:08.576581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.641 [2024-11-28 13:10:08.576598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.641 [2024-11-28 13:10:08.576605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.641 [2024-11-28 13:10:08.576823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.641 [2024-11-28 13:10:08.577040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.641 [2024-11-28 13:10:08.577048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.641 [2024-11-28 13:10:08.577056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.641 [2024-11-28 13:10:08.577062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.641 [2024-11-28 13:10:08.589959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.641 [2024-11-28 13:10:08.590650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.641 [2024-11-28 13:10:08.590689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.641 [2024-11-28 13:10:08.590699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.641 [2024-11-28 13:10:08.590937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.641 [2024-11-28 13:10:08.591167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.641 [2024-11-28 13:10:08.591177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.641 [2024-11-28 13:10:08.591189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.641 [2024-11-28 13:10:08.591198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.641 [2024-11-28 13:10:08.603879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.641 [2024-11-28 13:10:08.604468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.641 [2024-11-28 13:10:08.604506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.641 [2024-11-28 13:10:08.604519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.641 [2024-11-28 13:10:08.604759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.641 [2024-11-28 13:10:08.604981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.641 [2024-11-28 13:10:08.604990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.641 [2024-11-28 13:10:08.604998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.641 [2024-11-28 13:10:08.605006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.641 [2024-11-28 13:10:08.617704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.641 [2024-11-28 13:10:08.618285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.641 [2024-11-28 13:10:08.618305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.641 [2024-11-28 13:10:08.618313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.641 [2024-11-28 13:10:08.618531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.641 [2024-11-28 13:10:08.618750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.641 [2024-11-28 13:10:08.618758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.641 [2024-11-28 13:10:08.618765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.641 [2024-11-28 13:10:08.618772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.641 [2024-11-28 13:10:08.631460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.641 [2024-11-28 13:10:08.632003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.641 [2024-11-28 13:10:08.632041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.641 [2024-11-28 13:10:08.632051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.641 [2024-11-28 13:10:08.632297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.641 [2024-11-28 13:10:08.632519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.641 [2024-11-28 13:10:08.632528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.641 [2024-11-28 13:10:08.632537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.641 [2024-11-28 13:10:08.632544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.641 [2024-11-28 13:10:08.645240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.641 [2024-11-28 13:10:08.645783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.641 [2024-11-28 13:10:08.645802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.641 [2024-11-28 13:10:08.645810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.641 [2024-11-28 13:10:08.646029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.641 [2024-11-28 13:10:08.646252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.641 [2024-11-28 13:10:08.646261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.641 [2024-11-28 13:10:08.646269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.641 [2024-11-28 13:10:08.646275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.641 [2024-11-28 13:10:08.659164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.641 [2024-11-28 13:10:08.659801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.641 [2024-11-28 13:10:08.659839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.641 [2024-11-28 13:10:08.659850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.641 [2024-11-28 13:10:08.660087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.641 [2024-11-28 13:10:08.660317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.641 [2024-11-28 13:10:08.660327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.641 [2024-11-28 13:10:08.660335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.641 [2024-11-28 13:10:08.660343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.641 [2024-11-28 13:10:08.672906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.641 [2024-11-28 13:10:08.673576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.641 [2024-11-28 13:10:08.673614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.641 [2024-11-28 13:10:08.673625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.641 [2024-11-28 13:10:08.673862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.641 [2024-11-28 13:10:08.674083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.641 [2024-11-28 13:10:08.674092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.641 [2024-11-28 13:10:08.674099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.641 [2024-11-28 13:10:08.674107] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.641 [2024-11-28 13:10:08.686802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.641 [2024-11-28 13:10:08.687477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.641 [2024-11-28 13:10:08.687515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.642 [2024-11-28 13:10:08.687532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.642 [2024-11-28 13:10:08.687771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.642 [2024-11-28 13:10:08.687992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.642 [2024-11-28 13:10:08.688002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.642 [2024-11-28 13:10:08.688009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.642 [2024-11-28 13:10:08.688017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.642 [2024-11-28 13:10:08.700704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.642 [2024-11-28 13:10:08.701287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.642 [2024-11-28 13:10:08.701324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.642 [2024-11-28 13:10:08.701337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.642 [2024-11-28 13:10:08.701578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.642 [2024-11-28 13:10:08.701800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.642 [2024-11-28 13:10:08.701809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.642 [2024-11-28 13:10:08.701817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.642 [2024-11-28 13:10:08.701825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.642 [2024-11-28 13:10:08.714529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.642 [2024-11-28 13:10:08.715118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.642 [2024-11-28 13:10:08.715138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.642 [2024-11-28 13:10:08.715146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.642 [2024-11-28 13:10:08.715370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.642 [2024-11-28 13:10:08.715589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.642 [2024-11-28 13:10:08.715598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.642 [2024-11-28 13:10:08.715606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.642 [2024-11-28 13:10:08.715613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.642 [2024-11-28 13:10:08.728298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.642 [2024-11-28 13:10:08.728848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.642 [2024-11-28 13:10:08.728887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.642 [2024-11-28 13:10:08.728900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.642 [2024-11-28 13:10:08.729140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.642 [2024-11-28 13:10:08.729374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.642 [2024-11-28 13:10:08.729384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.642 [2024-11-28 13:10:08.729393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.642 [2024-11-28 13:10:08.729402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.642 [2024-11-28 13:10:08.742107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.642 [2024-11-28 13:10:08.742746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.642 [2024-11-28 13:10:08.742786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.642 [2024-11-28 13:10:08.742797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.642 [2024-11-28 13:10:08.743034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.642 [2024-11-28 13:10:08.743265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.642 [2024-11-28 13:10:08.743275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.642 [2024-11-28 13:10:08.743283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.642 [2024-11-28 13:10:08.743291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.929 [2024-11-28 13:10:08.755979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.929 [2024-11-28 13:10:08.756640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.929 [2024-11-28 13:10:08.756678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.929 [2024-11-28 13:10:08.756690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.929 [2024-11-28 13:10:08.756928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.929 [2024-11-28 13:10:08.757150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.929 [2024-11-28 13:10:08.757167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.929 [2024-11-28 13:10:08.757176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.929 [2024-11-28 13:10:08.757184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.929 [2024-11-28 13:10:08.769870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.929 [2024-11-28 13:10:08.770397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.929 [2024-11-28 13:10:08.770417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.929 [2024-11-28 13:10:08.770426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.929 [2024-11-28 13:10:08.770644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.929 [2024-11-28 13:10:08.770862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.929 [2024-11-28 13:10:08.770871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.929 [2024-11-28 13:10:08.770883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.929 [2024-11-28 13:10:08.770890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.929 [2024-11-28 13:10:08.783782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.929 [2024-11-28 13:10:08.784293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.929 [2024-11-28 13:10:08.784310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.929 [2024-11-28 13:10:08.784318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.929 [2024-11-28 13:10:08.784536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.929 [2024-11-28 13:10:08.784754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.929 [2024-11-28 13:10:08.784762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.929 [2024-11-28 13:10:08.784769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.929 [2024-11-28 13:10:08.784776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.929 [2024-11-28 13:10:08.797672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.929 [2024-11-28 13:10:08.798279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.929 [2024-11-28 13:10:08.798317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.929 [2024-11-28 13:10:08.798328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.929 [2024-11-28 13:10:08.798565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.929 [2024-11-28 13:10:08.798787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.929 [2024-11-28 13:10:08.798796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.929 [2024-11-28 13:10:08.798803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.929 [2024-11-28 13:10:08.798811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.929 [2024-11-28 13:10:08.811521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.929 [2024-11-28 13:10:08.812063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.929 [2024-11-28 13:10:08.812082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.929 [2024-11-28 13:10:08.812090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.929 [2024-11-28 13:10:08.812314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.929 [2024-11-28 13:10:08.812533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.929 [2024-11-28 13:10:08.812542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.929 [2024-11-28 13:10:08.812549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.929 [2024-11-28 13:10:08.812556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.929 [2024-11-28 13:10:08.825455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.929 [2024-11-28 13:10:08.825913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.929 [2024-11-28 13:10:08.825930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.929 [2024-11-28 13:10:08.825937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.929 [2024-11-28 13:10:08.826155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.929 [2024-11-28 13:10:08.826379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.929 [2024-11-28 13:10:08.826388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.929 [2024-11-28 13:10:08.826396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.929 [2024-11-28 13:10:08.826403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.929 9433.67 IOPS, 36.85 MiB/s [2024-11-28T12:10:09.056Z] [2024-11-28 13:10:08.839497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.929 [2024-11-28 13:10:08.840130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.929 [2024-11-28 13:10:08.840177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.929 [2024-11-28 13:10:08.840190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.929 [2024-11-28 13:10:08.840432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.929 [2024-11-28 13:10:08.840654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.929 [2024-11-28 13:10:08.840662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.929 [2024-11-28 13:10:08.840670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.929 [2024-11-28 13:10:08.840678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.929 [2024-11-28 13:10:08.853370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.929 [2024-11-28 13:10:08.854016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.929 [2024-11-28 13:10:08.854054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.929 [2024-11-28 13:10:08.854065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.929 [2024-11-28 13:10:08.854311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.929 [2024-11-28 13:10:08.854533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.929 [2024-11-28 13:10:08.854542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.930 [2024-11-28 13:10:08.854550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.930 [2024-11-28 13:10:08.854557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.930 [2024-11-28 13:10:08.867299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.930 [2024-11-28 13:10:08.867880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.930 [2024-11-28 13:10:08.867900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.930 [2024-11-28 13:10:08.867912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.930 [2024-11-28 13:10:08.868131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.930 [2024-11-28 13:10:08.868356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.930 [2024-11-28 13:10:08.868365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.930 [2024-11-28 13:10:08.868372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.930 [2024-11-28 13:10:08.868379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.930 [2024-11-28 13:10:08.881055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.930 [2024-11-28 13:10:08.881652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.930 [2024-11-28 13:10:08.881691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.930 [2024-11-28 13:10:08.881702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.930 [2024-11-28 13:10:08.881940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.930 [2024-11-28 13:10:08.882169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.930 [2024-11-28 13:10:08.882179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.930 [2024-11-28 13:10:08.882186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.930 [2024-11-28 13:10:08.882195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.930 [2024-11-28 13:10:08.894891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.930 [2024-11-28 13:10:08.895490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.930 [2024-11-28 13:10:08.895510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.930 [2024-11-28 13:10:08.895518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.930 [2024-11-28 13:10:08.895738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.930 [2024-11-28 13:10:08.895956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.930 [2024-11-28 13:10:08.895965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.930 [2024-11-28 13:10:08.895972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.930 [2024-11-28 13:10:08.895979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.930 [2024-11-28 13:10:08.908665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.930 [2024-11-28 13:10:08.909192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.930 [2024-11-28 13:10:08.909210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.930 [2024-11-28 13:10:08.909218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.930 [2024-11-28 13:10:08.909437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.930 [2024-11-28 13:10:08.909660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.930 [2024-11-28 13:10:08.909669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.930 [2024-11-28 13:10:08.909676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.930 [2024-11-28 13:10:08.909683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.930 [2024-11-28 13:10:08.922592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.930 [2024-11-28 13:10:08.923260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.930 [2024-11-28 13:10:08.923302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.930 [2024-11-28 13:10:08.923314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.930 [2024-11-28 13:10:08.923556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.930 [2024-11-28 13:10:08.923778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.930 [2024-11-28 13:10:08.923787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.930 [2024-11-28 13:10:08.923795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.930 [2024-11-28 13:10:08.923803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.930 [2024-11-28 13:10:08.936516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.930 [2024-11-28 13:10:08.937225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.930 [2024-11-28 13:10:08.937269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.930 [2024-11-28 13:10:08.937282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.930 [2024-11-28 13:10:08.937522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.930 [2024-11-28 13:10:08.937756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.930 [2024-11-28 13:10:08.937766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.930 [2024-11-28 13:10:08.937774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.930 [2024-11-28 13:10:08.937782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.930 [2024-11-28 13:10:08.950282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.930 [2024-11-28 13:10:08.950925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.930 [2024-11-28 13:10:08.950968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.930 [2024-11-28 13:10:08.950979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.930 [2024-11-28 13:10:08.951229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.930 [2024-11-28 13:10:08.951452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.930 [2024-11-28 13:10:08.951463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.930 [2024-11-28 13:10:08.951477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.930 [2024-11-28 13:10:08.951485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.930 [2024-11-28 13:10:08.964196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.930 [2024-11-28 13:10:08.964791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.930 [2024-11-28 13:10:08.964836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.930 [2024-11-28 13:10:08.964849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.930 [2024-11-28 13:10:08.965092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.930 [2024-11-28 13:10:08.965324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.930 [2024-11-28 13:10:08.965335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.930 [2024-11-28 13:10:08.965344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.930 [2024-11-28 13:10:08.965353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.930 [2024-11-28 13:10:08.978075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.930 [2024-11-28 13:10:08.978685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.930 [2024-11-28 13:10:08.978708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.930 [2024-11-28 13:10:08.978716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.930 [2024-11-28 13:10:08.978936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.930 [2024-11-28 13:10:08.979156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.930 [2024-11-28 13:10:08.979171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.930 [2024-11-28 13:10:08.979179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.930 [2024-11-28 13:10:08.979186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.930 [2024-11-28 13:10:08.991886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.930 [2024-11-28 13:10:08.992446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.930 [2024-11-28 13:10:08.992467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.930 [2024-11-28 13:10:08.992475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.930 [2024-11-28 13:10:08.992694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.930 [2024-11-28 13:10:08.992912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.930 [2024-11-28 13:10:08.992922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.931 [2024-11-28 13:10:08.992929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.931 [2024-11-28 13:10:08.992936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.931 [2024-11-28 13:10:09.005652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.931 [2024-11-28 13:10:09.006199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.931 [2024-11-28 13:10:09.006219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.931 [2024-11-28 13:10:09.006227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.931 [2024-11-28 13:10:09.006445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.931 [2024-11-28 13:10:09.006664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.931 [2024-11-28 13:10:09.006673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.931 [2024-11-28 13:10:09.006680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.931 [2024-11-28 13:10:09.006687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.931 [2024-11-28 13:10:09.019405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.931 [2024-11-28 13:10:09.019962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.931 [2024-11-28 13:10:09.019983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.931 [2024-11-28 13:10:09.019991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.931 [2024-11-28 13:10:09.020217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.931 [2024-11-28 13:10:09.020438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.931 [2024-11-28 13:10:09.020448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.931 [2024-11-28 13:10:09.020455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.931 [2024-11-28 13:10:09.020463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.931 [2024-11-28 13:10:09.033170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.931 [2024-11-28 13:10:09.033770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.931 [2024-11-28 13:10:09.033790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.931 [2024-11-28 13:10:09.033798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.931 [2024-11-28 13:10:09.034017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.931 [2024-11-28 13:10:09.034243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.931 [2024-11-28 13:10:09.034252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.931 [2024-11-28 13:10:09.034261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.931 [2024-11-28 13:10:09.034268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:38.931 [2024-11-28 13:10:09.046988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:38.931 [2024-11-28 13:10:09.047574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.931 [2024-11-28 13:10:09.047596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:38.931 [2024-11-28 13:10:09.047610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:38.931 [2024-11-28 13:10:09.047829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:38.931 [2024-11-28 13:10:09.048048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:38.931 [2024-11-28 13:10:09.048057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:38.931 [2024-11-28 13:10:09.048064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:38.931 [2024-11-28 13:10:09.048071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.204 [2024-11-28 13:10:09.060779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.204 [2024-11-28 13:10:09.061269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.204 [2024-11-28 13:10:09.061291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.204 [2024-11-28 13:10:09.061299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.204 [2024-11-28 13:10:09.061517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.204 [2024-11-28 13:10:09.061737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.204 [2024-11-28 13:10:09.061753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.204 [2024-11-28 13:10:09.061761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.204 [2024-11-28 13:10:09.061769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.204 [2024-11-28 13:10:09.074685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.204 [2024-11-28 13:10:09.075246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.204 [2024-11-28 13:10:09.075301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.204 [2024-11-28 13:10:09.075315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.204 [2024-11-28 13:10:09.075568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.204 [2024-11-28 13:10:09.075793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.204 [2024-11-28 13:10:09.075804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.204 [2024-11-28 13:10:09.075812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.204 [2024-11-28 13:10:09.075820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.204 [2024-11-28 13:10:09.088550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.204 [2024-11-28 13:10:09.089238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.204 [2024-11-28 13:10:09.089292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.204 [2024-11-28 13:10:09.089306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.204 [2024-11-28 13:10:09.089557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.204 [2024-11-28 13:10:09.089788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.204 [2024-11-28 13:10:09.089798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.204 [2024-11-28 13:10:09.089806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.204 [2024-11-28 13:10:09.089815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.204 [2024-11-28 13:10:09.102344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.204 [2024-11-28 13:10:09.103062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.204 [2024-11-28 13:10:09.103113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.204 [2024-11-28 13:10:09.103126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.204 [2024-11-28 13:10:09.103382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.204 [2024-11-28 13:10:09.103606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.204 [2024-11-28 13:10:09.103615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.205 [2024-11-28 13:10:09.103623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.205 [2024-11-28 13:10:09.103632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.205 [2024-11-28 13:10:09.116174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.205 [2024-11-28 13:10:09.116848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.205 [2024-11-28 13:10:09.116902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.205 [2024-11-28 13:10:09.116914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.205 [2024-11-28 13:10:09.117175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.205 [2024-11-28 13:10:09.117401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.205 [2024-11-28 13:10:09.117411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.205 [2024-11-28 13:10:09.117419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.205 [2024-11-28 13:10:09.117428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.205 [2024-11-28 13:10:09.129937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.205 [2024-11-28 13:10:09.130631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.205 [2024-11-28 13:10:09.130690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.205 [2024-11-28 13:10:09.130702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.205 [2024-11-28 13:10:09.130953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.205 [2024-11-28 13:10:09.131191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.205 [2024-11-28 13:10:09.131201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.205 [2024-11-28 13:10:09.131216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.205 [2024-11-28 13:10:09.131225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.205 [2024-11-28 13:10:09.143754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.205 [2024-11-28 13:10:09.144498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.205 [2024-11-28 13:10:09.144561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.205 [2024-11-28 13:10:09.144574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.205 [2024-11-28 13:10:09.144828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.205 [2024-11-28 13:10:09.145054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.205 [2024-11-28 13:10:09.145063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.205 [2024-11-28 13:10:09.145072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.205 [2024-11-28 13:10:09.145081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.205 [2024-11-28 13:10:09.157601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.205 [2024-11-28 13:10:09.158274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.205 [2024-11-28 13:10:09.158337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.205 [2024-11-28 13:10:09.158350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.205 [2024-11-28 13:10:09.158604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.205 [2024-11-28 13:10:09.158830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.205 [2024-11-28 13:10:09.158840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.205 [2024-11-28 13:10:09.158849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.205 [2024-11-28 13:10:09.158858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.205 [2024-11-28 13:10:09.171381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.205 [2024-11-28 13:10:09.172061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.205 [2024-11-28 13:10:09.172123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.205 [2024-11-28 13:10:09.172136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.205 [2024-11-28 13:10:09.172405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.205 [2024-11-28 13:10:09.172631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.205 [2024-11-28 13:10:09.172640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.205 [2024-11-28 13:10:09.172649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.205 [2024-11-28 13:10:09.172658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.205 [2024-11-28 13:10:09.185195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.205 [2024-11-28 13:10:09.185885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.205 [2024-11-28 13:10:09.185948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.205 [2024-11-28 13:10:09.185961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.205 [2024-11-28 13:10:09.186226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.205 [2024-11-28 13:10:09.186453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.205 [2024-11-28 13:10:09.186463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.205 [2024-11-28 13:10:09.186471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.205 [2024-11-28 13:10:09.186480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.205 [2024-11-28 13:10:09.198991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.205 [2024-11-28 13:10:09.199726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.205 [2024-11-28 13:10:09.199788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.205 [2024-11-28 13:10:09.199801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.205 [2024-11-28 13:10:09.200056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.205 [2024-11-28 13:10:09.200296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.205 [2024-11-28 13:10:09.200307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.205 [2024-11-28 13:10:09.200315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.205 [2024-11-28 13:10:09.200324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.205 [2024-11-28 13:10:09.212844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.205 [2024-11-28 13:10:09.213588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.205 [2024-11-28 13:10:09.213651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.205 [2024-11-28 13:10:09.213665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.205 [2024-11-28 13:10:09.213919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.205 [2024-11-28 13:10:09.214144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.205 [2024-11-28 13:10:09.214153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.205 [2024-11-28 13:10:09.214189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.205 [2024-11-28 13:10:09.214199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.205 [2024-11-28 13:10:09.226710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.205 [2024-11-28 13:10:09.227471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.205 [2024-11-28 13:10:09.227533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.205 [2024-11-28 13:10:09.227553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.205 [2024-11-28 13:10:09.227809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.205 [2024-11-28 13:10:09.228035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.205 [2024-11-28 13:10:09.228044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.205 [2024-11-28 13:10:09.228053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.205 [2024-11-28 13:10:09.228062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.205 [2024-11-28 13:10:09.240608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.205 [2024-11-28 13:10:09.241221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.205 [2024-11-28 13:10:09.241285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.205 [2024-11-28 13:10:09.241300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.205 [2024-11-28 13:10:09.241556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.205 [2024-11-28 13:10:09.241782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.205 [2024-11-28 13:10:09.241793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.206 [2024-11-28 13:10:09.241801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.206 [2024-11-28 13:10:09.241810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.206 [2024-11-28 13:10:09.254541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.206 [2024-11-28 13:10:09.255266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.206 [2024-11-28 13:10:09.255330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.206 [2024-11-28 13:10:09.255343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.206 [2024-11-28 13:10:09.255598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.206 [2024-11-28 13:10:09.255823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.206 [2024-11-28 13:10:09.255834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.206 [2024-11-28 13:10:09.255842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.206 [2024-11-28 13:10:09.255851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.206 [2024-11-28 13:10:09.268385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.206 [2024-11-28 13:10:09.269090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.206 [2024-11-28 13:10:09.269153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.206 [2024-11-28 13:10:09.269180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.206 [2024-11-28 13:10:09.269434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.206 [2024-11-28 13:10:09.269668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.206 [2024-11-28 13:10:09.269677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.206 [2024-11-28 13:10:09.269686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.206 [2024-11-28 13:10:09.269695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.206 [2024-11-28 13:10:09.282211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.206 [2024-11-28 13:10:09.282897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.206 [2024-11-28 13:10:09.282959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.206 [2024-11-28 13:10:09.282972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.206 [2024-11-28 13:10:09.283241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.206 [2024-11-28 13:10:09.283469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.206 [2024-11-28 13:10:09.283478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.206 [2024-11-28 13:10:09.283487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.206 [2024-11-28 13:10:09.283496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.206 [2024-11-28 13:10:09.296007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.206 [2024-11-28 13:10:09.296697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.206 [2024-11-28 13:10:09.296760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.206 [2024-11-28 13:10:09.296773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.206 [2024-11-28 13:10:09.297027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.206 [2024-11-28 13:10:09.297267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.206 [2024-11-28 13:10:09.297277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.206 [2024-11-28 13:10:09.297285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.206 [2024-11-28 13:10:09.297295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.206 [2024-11-28 13:10:09.309873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.206 [2024-11-28 13:10:09.310583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.206 [2024-11-28 13:10:09.310646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.206 [2024-11-28 13:10:09.310659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.206 [2024-11-28 13:10:09.310914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.206 [2024-11-28 13:10:09.311139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.206 [2024-11-28 13:10:09.311149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.206 [2024-11-28 13:10:09.311178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.206 [2024-11-28 13:10:09.311187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.206 [2024-11-28 13:10:09.323722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.206 [2024-11-28 13:10:09.324469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.206 [2024-11-28 13:10:09.324532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.206 [2024-11-28 13:10:09.324545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.206 [2024-11-28 13:10:09.324799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.206 [2024-11-28 13:10:09.325026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.206 [2024-11-28 13:10:09.325035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.206 [2024-11-28 13:10:09.325043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.206 [2024-11-28 13:10:09.325052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.467 [2024-11-28 13:10:09.337592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.468 [2024-11-28 13:10:09.338271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.468 [2024-11-28 13:10:09.338334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.468 [2024-11-28 13:10:09.338347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.468 [2024-11-28 13:10:09.338602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.468 [2024-11-28 13:10:09.338842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.468 [2024-11-28 13:10:09.338853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.468 [2024-11-28 13:10:09.338861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.468 [2024-11-28 13:10:09.338870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.468 [2024-11-28 13:10:09.351413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.468 [2024-11-28 13:10:09.352063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.468 [2024-11-28 13:10:09.352125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.468 [2024-11-28 13:10:09.352138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.468 [2024-11-28 13:10:09.352406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.468 [2024-11-28 13:10:09.352633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.468 [2024-11-28 13:10:09.352642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.468 [2024-11-28 13:10:09.352651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.468 [2024-11-28 13:10:09.352660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.468 [2024-11-28 13:10:09.365178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.468 [2024-11-28 13:10:09.365859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.468 [2024-11-28 13:10:09.365922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.468 [2024-11-28 13:10:09.365935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.468 [2024-11-28 13:10:09.366204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.468 [2024-11-28 13:10:09.366430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.468 [2024-11-28 13:10:09.366439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.468 [2024-11-28 13:10:09.366448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.468 [2024-11-28 13:10:09.366457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.468 [2024-11-28 13:10:09.378966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.468 [2024-11-28 13:10:09.379662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.468 [2024-11-28 13:10:09.379725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.468 [2024-11-28 13:10:09.379738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.468 [2024-11-28 13:10:09.379993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.468 [2024-11-28 13:10:09.380233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.468 [2024-11-28 13:10:09.380243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.468 [2024-11-28 13:10:09.380251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.468 [2024-11-28 13:10:09.380260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.468 [2024-11-28 13:10:09.392773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.468 [2024-11-28 13:10:09.393328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.468 [2024-11-28 13:10:09.393392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.468 [2024-11-28 13:10:09.393407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.468 [2024-11-28 13:10:09.393661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.468 [2024-11-28 13:10:09.393887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.468 [2024-11-28 13:10:09.393897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.468 [2024-11-28 13:10:09.393906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.468 [2024-11-28 13:10:09.393916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.468 [2024-11-28 13:10:09.406652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.468 [2024-11-28 13:10:09.407243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.468 [2024-11-28 13:10:09.407276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.468 [2024-11-28 13:10:09.407293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.468 [2024-11-28 13:10:09.407515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.468 [2024-11-28 13:10:09.407736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.468 [2024-11-28 13:10:09.407746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.468 [2024-11-28 13:10:09.407755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.468 [2024-11-28 13:10:09.407763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.468 [2024-11-28 13:10:09.420505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.468 [2024-11-28 13:10:09.421178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.468 [2024-11-28 13:10:09.421240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.468 [2024-11-28 13:10:09.421254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.468 [2024-11-28 13:10:09.421509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.468 [2024-11-28 13:10:09.421735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.468 [2024-11-28 13:10:09.421744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.468 [2024-11-28 13:10:09.421754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.468 [2024-11-28 13:10:09.421763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.468 [2024-11-28 13:10:09.434302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.468 [2024-11-28 13:10:09.434980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.468 [2024-11-28 13:10:09.435043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.468 [2024-11-28 13:10:09.435057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.468 [2024-11-28 13:10:09.435328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.468 [2024-11-28 13:10:09.435555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.468 [2024-11-28 13:10:09.435566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.468 [2024-11-28 13:10:09.435575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.469 [2024-11-28 13:10:09.435585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.469 [2024-11-28 13:10:09.447594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.469 [2024-11-28 13:10:09.448221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.469 [2024-11-28 13:10:09.448275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.469 [2024-11-28 13:10:09.448285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.469 [2024-11-28 13:10:09.448469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.469 [2024-11-28 13:10:09.448636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.469 [2024-11-28 13:10:09.448645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.469 [2024-11-28 13:10:09.448651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.469 [2024-11-28 13:10:09.448658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.469 [2024-11-28 13:10:09.460273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.469 [2024-11-28 13:10:09.460841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.469 [2024-11-28 13:10:09.460893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.469 [2024-11-28 13:10:09.460902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.469 [2024-11-28 13:10:09.461083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.469 [2024-11-28 13:10:09.461253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.469 [2024-11-28 13:10:09.461261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.469 [2024-11-28 13:10:09.461267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.469 [2024-11-28 13:10:09.461273] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.469 [2024-11-28 13:10:09.472998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.469 [2024-11-28 13:10:09.473626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.469 [2024-11-28 13:10:09.473674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.469 [2024-11-28 13:10:09.473684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.469 [2024-11-28 13:10:09.473863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.469 [2024-11-28 13:10:09.474019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.469 [2024-11-28 13:10:09.474027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.469 [2024-11-28 13:10:09.474033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.469 [2024-11-28 13:10:09.474041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.469 [2024-11-28 13:10:09.485636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.469 [2024-11-28 13:10:09.486058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.469 [2024-11-28 13:10:09.486078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.469 [2024-11-28 13:10:09.486085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.469 [2024-11-28 13:10:09.486244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.469 [2024-11-28 13:10:09.486397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.469 [2024-11-28 13:10:09.486403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.469 [2024-11-28 13:10:09.486414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.469 [2024-11-28 13:10:09.486420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.469 [2024-11-28 13:10:09.498261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.469 [2024-11-28 13:10:09.498762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.469 [2024-11-28 13:10:09.498778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.469 [2024-11-28 13:10:09.498784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.469 [2024-11-28 13:10:09.498935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.469 [2024-11-28 13:10:09.499086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.469 [2024-11-28 13:10:09.499092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.469 [2024-11-28 13:10:09.499097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.469 [2024-11-28 13:10:09.499103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.469 [2024-11-28 13:10:09.510942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.469 [2024-11-28 13:10:09.511316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.469 [2024-11-28 13:10:09.511332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.469 [2024-11-28 13:10:09.511338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.469 [2024-11-28 13:10:09.511489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.469 [2024-11-28 13:10:09.511641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.469 [2024-11-28 13:10:09.511648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.469 [2024-11-28 13:10:09.511653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.469 [2024-11-28 13:10:09.511658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.469 [2024-11-28 13:10:09.523650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.469 [2024-11-28 13:10:09.524111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.469 [2024-11-28 13:10:09.524126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.469 [2024-11-28 13:10:09.524131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.469 [2024-11-28 13:10:09.524287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.469 [2024-11-28 13:10:09.524437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.469 [2024-11-28 13:10:09.524444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.469 [2024-11-28 13:10:09.524449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.469 [2024-11-28 13:10:09.524453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.469 [2024-11-28 13:10:09.536279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.469 [2024-11-28 13:10:09.536872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.469 [2024-11-28 13:10:09.536908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.469 [2024-11-28 13:10:09.536916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.469 [2024-11-28 13:10:09.537085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.469 [2024-11-28 13:10:09.537247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.469 [2024-11-28 13:10:09.537254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.470 [2024-11-28 13:10:09.537261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.470 [2024-11-28 13:10:09.537267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.470 [2024-11-28 13:10:09.548967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.470 [2024-11-28 13:10:09.549578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.470 [2024-11-28 13:10:09.549612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.470 [2024-11-28 13:10:09.549621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.470 [2024-11-28 13:10:09.549789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.470 [2024-11-28 13:10:09.549942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.470 [2024-11-28 13:10:09.549949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.470 [2024-11-28 13:10:09.549954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.470 [2024-11-28 13:10:09.549960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.470 [2024-11-28 13:10:09.561657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.470 [2024-11-28 13:10:09.562242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.470 [2024-11-28 13:10:09.562275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.470 [2024-11-28 13:10:09.562284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.470 [2024-11-28 13:10:09.562451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.470 [2024-11-28 13:10:09.562604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.470 [2024-11-28 13:10:09.562610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.470 [2024-11-28 13:10:09.562616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.470 [2024-11-28 13:10:09.562622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.470 [2024-11-28 13:10:09.574319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.470 [2024-11-28 13:10:09.574905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.470 [2024-11-28 13:10:09.574936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.470 [2024-11-28 13:10:09.574948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.470 [2024-11-28 13:10:09.575115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.470 [2024-11-28 13:10:09.575276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.470 [2024-11-28 13:10:09.575283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.470 [2024-11-28 13:10:09.575288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.470 [2024-11-28 13:10:09.575294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.470 [2024-11-28 13:10:09.586938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.470 [2024-11-28 13:10:09.587346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.470 [2024-11-28 13:10:09.587377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.470 [2024-11-28 13:10:09.587386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.470 [2024-11-28 13:10:09.587555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.470 [2024-11-28 13:10:09.587708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.470 [2024-11-28 13:10:09.587714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.470 [2024-11-28 13:10:09.587719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.470 [2024-11-28 13:10:09.587725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.732 [2024-11-28 13:10:09.599563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.732 [2024-11-28 13:10:09.600070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.732 [2024-11-28 13:10:09.600100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.732 [2024-11-28 13:10:09.600109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.732 [2024-11-28 13:10:09.600284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.732 [2024-11-28 13:10:09.600438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.732 [2024-11-28 13:10:09.600444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.732 [2024-11-28 13:10:09.600450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.732 [2024-11-28 13:10:09.600456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.732 [2024-11-28 13:10:09.612142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.732 [2024-11-28 13:10:09.612643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.732 [2024-11-28 13:10:09.612659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.732 [2024-11-28 13:10:09.612665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.732 [2024-11-28 13:10:09.612815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.732 [2024-11-28 13:10:09.612969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.732 [2024-11-28 13:10:09.612975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.732 [2024-11-28 13:10:09.612980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.732 [2024-11-28 13:10:09.612985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.732 [2024-11-28 13:10:09.624826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.732 [2024-11-28 13:10:09.625433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.732 [2024-11-28 13:10:09.625464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.732 [2024-11-28 13:10:09.625473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.733 [2024-11-28 13:10:09.625639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.733 [2024-11-28 13:10:09.625793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.733 [2024-11-28 13:10:09.625801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.733 [2024-11-28 13:10:09.625807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.733 [2024-11-28 13:10:09.625813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.733 [2024-11-28 13:10:09.637506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.733 [2024-11-28 13:10:09.638081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.733 [2024-11-28 13:10:09.638112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.733 [2024-11-28 13:10:09.638121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.733 [2024-11-28 13:10:09.638295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.733 [2024-11-28 13:10:09.638448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.733 [2024-11-28 13:10:09.638455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.733 [2024-11-28 13:10:09.638460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.733 [2024-11-28 13:10:09.638465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.733 [2024-11-28 13:10:09.650157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.733 [2024-11-28 13:10:09.650632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.733 [2024-11-28 13:10:09.650647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.733 [2024-11-28 13:10:09.650653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.733 [2024-11-28 13:10:09.650803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.733 [2024-11-28 13:10:09.650953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.733 [2024-11-28 13:10:09.650959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.733 [2024-11-28 13:10:09.650968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.733 [2024-11-28 13:10:09.650973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.733 [2024-11-28 13:10:09.662808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.733 [2024-11-28 13:10:09.663297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.733 [2024-11-28 13:10:09.663327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.733 [2024-11-28 13:10:09.663336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.733 [2024-11-28 13:10:09.663504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.733 [2024-11-28 13:10:09.663656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.733 [2024-11-28 13:10:09.663662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.733 [2024-11-28 13:10:09.663668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.733 [2024-11-28 13:10:09.663674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.733 [2024-11-28 13:10:09.675504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.733 [2024-11-28 13:10:09.675961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.733 [2024-11-28 13:10:09.675975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.733 [2024-11-28 13:10:09.675981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.733 [2024-11-28 13:10:09.676131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.733 [2024-11-28 13:10:09.676288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.733 [2024-11-28 13:10:09.676295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.733 [2024-11-28 13:10:09.676300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.733 [2024-11-28 13:10:09.676304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.733 [2024-11-28 13:10:09.688123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.733 [2024-11-28 13:10:09.688614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.733 [2024-11-28 13:10:09.688627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.733 [2024-11-28 13:10:09.688632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.733 [2024-11-28 13:10:09.688781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.733 [2024-11-28 13:10:09.688931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.733 [2024-11-28 13:10:09.688937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.733 [2024-11-28 13:10:09.688941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.733 [2024-11-28 13:10:09.688946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.733 [2024-11-28 13:10:09.700722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.733 [2024-11-28 13:10:09.701277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.733 [2024-11-28 13:10:09.701307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.733 [2024-11-28 13:10:09.701316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.733 [2024-11-28 13:10:09.701484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.733 [2024-11-28 13:10:09.701637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.733 [2024-11-28 13:10:09.701643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.733 [2024-11-28 13:10:09.701648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.733 [2024-11-28 13:10:09.701654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.733 [2024-11-28 13:10:09.713341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.733 [2024-11-28 13:10:09.713912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.733 [2024-11-28 13:10:09.713942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.733 [2024-11-28 13:10:09.713951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.733 [2024-11-28 13:10:09.714116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.733 [2024-11-28 13:10:09.714276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.733 [2024-11-28 13:10:09.714284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.733 [2024-11-28 13:10:09.714289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.733 [2024-11-28 13:10:09.714295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.733 [2024-11-28 13:10:09.726006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.733 [2024-11-28 13:10:09.726597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.733 [2024-11-28 13:10:09.726627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.733 [2024-11-28 13:10:09.726636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.733 [2024-11-28 13:10:09.726801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.733 [2024-11-28 13:10:09.726954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.733 [2024-11-28 13:10:09.726960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.733 [2024-11-28 13:10:09.726966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.733 [2024-11-28 13:10:09.726972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.733 [2024-11-28 13:10:09.738660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.733 [2024-11-28 13:10:09.739138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.733 [2024-11-28 13:10:09.739152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.733 [2024-11-28 13:10:09.739168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.733 [2024-11-28 13:10:09.739318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.733 [2024-11-28 13:10:09.739476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.733 [2024-11-28 13:10:09.739482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.733 [2024-11-28 13:10:09.739487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.733 [2024-11-28 13:10:09.739492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.733 [2024-11-28 13:10:09.751320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.733 [2024-11-28 13:10:09.751907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.733 [2024-11-28 13:10:09.751937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.733 [2024-11-28 13:10:09.751945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.733 [2024-11-28 13:10:09.752111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.734 [2024-11-28 13:10:09.752272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.734 [2024-11-28 13:10:09.752279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.734 [2024-11-28 13:10:09.752285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.734 [2024-11-28 13:10:09.752291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.734 [2024-11-28 13:10:09.763973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.734 [2024-11-28 13:10:09.764568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.734 [2024-11-28 13:10:09.764599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.734 [2024-11-28 13:10:09.764608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.734 [2024-11-28 13:10:09.764773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.734 [2024-11-28 13:10:09.764926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.734 [2024-11-28 13:10:09.764932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.734 [2024-11-28 13:10:09.764938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.734 [2024-11-28 13:10:09.764943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.734 [2024-11-28 13:10:09.776628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.734 [2024-11-28 13:10:09.777127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.734 [2024-11-28 13:10:09.777142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.734 [2024-11-28 13:10:09.777148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.734 [2024-11-28 13:10:09.777303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.734 [2024-11-28 13:10:09.777464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.734 [2024-11-28 13:10:09.777470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.734 [2024-11-28 13:10:09.777475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.734 [2024-11-28 13:10:09.777480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.734 [2024-11-28 13:10:09.789300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.734 [2024-11-28 13:10:09.789876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.734 [2024-11-28 13:10:09.789906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.734 [2024-11-28 13:10:09.789915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.734 [2024-11-28 13:10:09.790080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.734 [2024-11-28 13:10:09.790241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.734 [2024-11-28 13:10:09.790248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.734 [2024-11-28 13:10:09.790253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.734 [2024-11-28 13:10:09.790259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.734 [2024-11-28 13:10:09.801940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.734 [2024-11-28 13:10:09.802411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.734 [2024-11-28 13:10:09.802427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.734 [2024-11-28 13:10:09.802433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.734 [2024-11-28 13:10:09.802583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.734 [2024-11-28 13:10:09.802733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.734 [2024-11-28 13:10:09.802739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.734 [2024-11-28 13:10:09.802743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.734 [2024-11-28 13:10:09.802748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.734 [2024-11-28 13:10:09.814569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.734 [2024-11-28 13:10:09.815049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.734 [2024-11-28 13:10:09.815062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.734 [2024-11-28 13:10:09.815068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.734 [2024-11-28 13:10:09.815223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.734 [2024-11-28 13:10:09.815373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.734 [2024-11-28 13:10:09.815379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.734 [2024-11-28 13:10:09.815390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.734 [2024-11-28 13:10:09.815395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.734 [2024-11-28 13:10:09.827226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.734 [2024-11-28 13:10:09.827678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.734 [2024-11-28 13:10:09.827691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.734 [2024-11-28 13:10:09.827696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.734 [2024-11-28 13:10:09.827846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.734 [2024-11-28 13:10:09.827996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.734 [2024-11-28 13:10:09.828001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.734 [2024-11-28 13:10:09.828006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.734 [2024-11-28 13:10:09.828011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.734 7075.25 IOPS, 27.64 MiB/s [2024-11-28T12:10:09.861Z] [2024-11-28 13:10:09.839829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.734 [2024-11-28 13:10:09.840301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.734 [2024-11-28 13:10:09.840315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.734 [2024-11-28 13:10:09.840320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.734 [2024-11-28 13:10:09.840470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.734 [2024-11-28 13:10:09.840620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.734 [2024-11-28 13:10:09.840625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.734 [2024-11-28 13:10:09.840630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.734 [2024-11-28 13:10:09.840635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.734 [2024-11-28 13:10:09.852456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.734 [2024-11-28 13:10:09.853040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.734 [2024-11-28 13:10:09.853070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.734 [2024-11-28 13:10:09.853079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.734 [2024-11-28 13:10:09.853252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.734 [2024-11-28 13:10:09.853406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.734 [2024-11-28 13:10:09.853412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.734 [2024-11-28 13:10:09.853417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.734 [2024-11-28 13:10:09.853423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.997 [2024-11-28 13:10:09.865108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.997 [2024-11-28 13:10:09.865591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.997 [2024-11-28 13:10:09.865621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.997 [2024-11-28 13:10:09.865629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.997 [2024-11-28 13:10:09.865797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.997 [2024-11-28 13:10:09.865950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.997 [2024-11-28 13:10:09.865956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.997 [2024-11-28 13:10:09.865961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.997 [2024-11-28 13:10:09.865967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.997 [2024-11-28 13:10:09.877811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.997 [2024-11-28 13:10:09.878265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.997 [2024-11-28 13:10:09.878295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.997 [2024-11-28 13:10:09.878304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.997 [2024-11-28 13:10:09.878472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.997 [2024-11-28 13:10:09.878625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.997 [2024-11-28 13:10:09.878632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.997 [2024-11-28 13:10:09.878637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.997 [2024-11-28 13:10:09.878643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.997 [2024-11-28 13:10:09.890484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.997 [2024-11-28 13:10:09.891076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.997 [2024-11-28 13:10:09.891106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.997 [2024-11-28 13:10:09.891115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.997 [2024-11-28 13:10:09.891287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.997 [2024-11-28 13:10:09.891441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.997 [2024-11-28 13:10:09.891447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.997 [2024-11-28 13:10:09.891453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.997 [2024-11-28 13:10:09.891458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.997 [2024-11-28 13:10:09.903145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.997 [2024-11-28 13:10:09.903712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.997 [2024-11-28 13:10:09.903742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.997 [2024-11-28 13:10:09.903754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.997 [2024-11-28 13:10:09.903919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.997 [2024-11-28 13:10:09.904072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.997 [2024-11-28 13:10:09.904078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.997 [2024-11-28 13:10:09.904084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.997 [2024-11-28 13:10:09.904089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.997 [2024-11-28 13:10:09.915774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.997 [2024-11-28 13:10:09.916268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.997 [2024-11-28 13:10:09.916299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.997 [2024-11-28 13:10:09.916308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.997 [2024-11-28 13:10:09.916476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.997 [2024-11-28 13:10:09.916629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.997 [2024-11-28 13:10:09.916635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.997 [2024-11-28 13:10:09.916641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.997 [2024-11-28 13:10:09.916646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.997 [2024-11-28 13:10:09.928353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.997 [2024-11-28 13:10:09.928929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.997 [2024-11-28 13:10:09.928959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.997 [2024-11-28 13:10:09.928968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.997 [2024-11-28 13:10:09.929133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.997 [2024-11-28 13:10:09.929294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.997 [2024-11-28 13:10:09.929301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.997 [2024-11-28 13:10:09.929306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.997 [2024-11-28 13:10:09.929312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.997 [2024-11-28 13:10:09.941007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.997 [2024-11-28 13:10:09.941495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.997 [2024-11-28 13:10:09.941511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.997 [2024-11-28 13:10:09.941516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.997 [2024-11-28 13:10:09.941667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.997 [2024-11-28 13:10:09.941821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.997 [2024-11-28 13:10:09.941827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.997 [2024-11-28 13:10:09.941832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.997 [2024-11-28 13:10:09.941837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.997 [2024-11-28 13:10:09.953655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.997 [2024-11-28 13:10:09.954215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.997 [2024-11-28 13:10:09.954245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.997 [2024-11-28 13:10:09.954253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.997 [2024-11-28 13:10:09.954419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.997 [2024-11-28 13:10:09.954572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.997 [2024-11-28 13:10:09.954578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.997 [2024-11-28 13:10:09.954584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.997 [2024-11-28 13:10:09.954589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.998 [2024-11-28 13:10:09.966276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.998 [2024-11-28 13:10:09.966767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.998 [2024-11-28 13:10:09.966781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.998 [2024-11-28 13:10:09.966787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.998 [2024-11-28 13:10:09.966937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.998 [2024-11-28 13:10:09.967087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.998 [2024-11-28 13:10:09.967093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.998 [2024-11-28 13:10:09.967098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.998 [2024-11-28 13:10:09.967103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.998 [2024-11-28 13:10:09.978929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.998 [2024-11-28 13:10:09.979459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.998 [2024-11-28 13:10:09.979490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.998 [2024-11-28 13:10:09.979499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.998 [2024-11-28 13:10:09.979667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.998 [2024-11-28 13:10:09.979820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.998 [2024-11-28 13:10:09.979826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.998 [2024-11-28 13:10:09.979835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.998 [2024-11-28 13:10:09.979841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.998 [2024-11-28 13:10:09.991521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.998 [2024-11-28 13:10:09.991996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.998 [2024-11-28 13:10:09.992011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.998 [2024-11-28 13:10:09.992017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.998 [2024-11-28 13:10:09.992173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.998 [2024-11-28 13:10:09.992324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.998 [2024-11-28 13:10:09.992330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.998 [2024-11-28 13:10:09.992335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.998 [2024-11-28 13:10:09.992339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.998 [2024-11-28 13:10:10.004663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.998 [2024-11-28 13:10:10.005286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.998 [2024-11-28 13:10:10.005316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.998 [2024-11-28 13:10:10.005325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.998 [2024-11-28 13:10:10.005493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.998 [2024-11-28 13:10:10.005647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.998 [2024-11-28 13:10:10.005654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.998 [2024-11-28 13:10:10.005659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.998 [2024-11-28 13:10:10.005665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.998 [2024-11-28 13:10:10.017287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.998 [2024-11-28 13:10:10.017816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.998 [2024-11-28 13:10:10.017846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.998 [2024-11-28 13:10:10.017855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.998 [2024-11-28 13:10:10.018021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.998 [2024-11-28 13:10:10.018187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.998 [2024-11-28 13:10:10.018194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.998 [2024-11-28 13:10:10.018200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.998 [2024-11-28 13:10:10.018206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.998 [2024-11-28 13:10:10.029893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.998 [2024-11-28 13:10:10.030522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.998 [2024-11-28 13:10:10.030553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.998 [2024-11-28 13:10:10.030562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.998 [2024-11-28 13:10:10.030728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.998 [2024-11-28 13:10:10.030881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.998 [2024-11-28 13:10:10.030887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.998 [2024-11-28 13:10:10.030892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.998 [2024-11-28 13:10:10.030898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.998 [2024-11-28 13:10:10.042605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.998 [2024-11-28 13:10:10.043115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.998 [2024-11-28 13:10:10.043145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.998 [2024-11-28 13:10:10.043154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.998 [2024-11-28 13:10:10.043331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.998 [2024-11-28 13:10:10.043484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.998 [2024-11-28 13:10:10.043491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.998 [2024-11-28 13:10:10.043496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.998 [2024-11-28 13:10:10.043502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.998 [2024-11-28 13:10:10.055184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.998 [2024-11-28 13:10:10.055781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.998 [2024-11-28 13:10:10.055811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.998 [2024-11-28 13:10:10.055820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.998 [2024-11-28 13:10:10.055986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.998 [2024-11-28 13:10:10.056139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.998 [2024-11-28 13:10:10.056145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.998 [2024-11-28 13:10:10.056150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.998 [2024-11-28 13:10:10.056156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.998 [2024-11-28 13:10:10.067849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.998 [2024-11-28 13:10:10.068489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.998 [2024-11-28 13:10:10.068520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.998 [2024-11-28 13:10:10.068531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.998 [2024-11-28 13:10:10.068697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.998 [2024-11-28 13:10:10.068849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.998 [2024-11-28 13:10:10.068856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.998 [2024-11-28 13:10:10.068861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.998 [2024-11-28 13:10:10.068867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.998 [2024-11-28 13:10:10.080558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.998 [2024-11-28 13:10:10.081133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.998 [2024-11-28 13:10:10.081168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.998 [2024-11-28 13:10:10.081178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.998 [2024-11-28 13:10:10.081343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.998 [2024-11-28 13:10:10.081496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.998 [2024-11-28 13:10:10.081502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.998 [2024-11-28 13:10:10.081508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.998 [2024-11-28 13:10:10.081514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.999 [2024-11-28 13:10:10.093226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.999 [2024-11-28 13:10:10.093807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.999 [2024-11-28 13:10:10.093837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.999 [2024-11-28 13:10:10.093846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.999 [2024-11-28 13:10:10.094012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.999 [2024-11-28 13:10:10.094172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.999 [2024-11-28 13:10:10.094179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.999 [2024-11-28 13:10:10.094184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.999 [2024-11-28 13:10:10.094190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.999 [2024-11-28 13:10:10.105870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.999 [2024-11-28 13:10:10.106262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.999 [2024-11-28 13:10:10.106293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.999 [2024-11-28 13:10:10.106301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.999 [2024-11-28 13:10:10.106470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.999 [2024-11-28 13:10:10.106627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.999 [2024-11-28 13:10:10.106633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.999 [2024-11-28 13:10:10.106638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.999 [2024-11-28 13:10:10.106644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:39.999 [2024-11-28 13:10:10.118502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:39.999 [2024-11-28 13:10:10.119103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.999 [2024-11-28 13:10:10.119134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:39.999 [2024-11-28 13:10:10.119143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:39.999 [2024-11-28 13:10:10.119317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:39.999 [2024-11-28 13:10:10.119471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:39.999 [2024-11-28 13:10:10.119477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:39.999 [2024-11-28 13:10:10.119483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:39.999 [2024-11-28 13:10:10.119489] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.261 [2024-11-28 13:10:10.131179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.261 [2024-11-28 13:10:10.131645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.261 [2024-11-28 13:10:10.131674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.261 [2024-11-28 13:10:10.131683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.261 [2024-11-28 13:10:10.131852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.261 [2024-11-28 13:10:10.132005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.261 [2024-11-28 13:10:10.132011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.261 [2024-11-28 13:10:10.132017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.261 [2024-11-28 13:10:10.132022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.261 [2024-11-28 13:10:10.143874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.261 [2024-11-28 13:10:10.144445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.261 [2024-11-28 13:10:10.144475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.261 [2024-11-28 13:10:10.144484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.261 [2024-11-28 13:10:10.144649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.261 [2024-11-28 13:10:10.144802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.261 [2024-11-28 13:10:10.144808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.261 [2024-11-28 13:10:10.144818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.261 [2024-11-28 13:10:10.144823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.261 [2024-11-28 13:10:10.156546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.261 [2024-11-28 13:10:10.157058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.261 [2024-11-28 13:10:10.157073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.261 [2024-11-28 13:10:10.157079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.261 [2024-11-28 13:10:10.157234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.261 [2024-11-28 13:10:10.157384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.261 [2024-11-28 13:10:10.157390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.261 [2024-11-28 13:10:10.157395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.261 [2024-11-28 13:10:10.157400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.261 [2024-11-28 13:10:10.169229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.261 [2024-11-28 13:10:10.169805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.261 [2024-11-28 13:10:10.169835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.261 [2024-11-28 13:10:10.169844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.261 [2024-11-28 13:10:10.170009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.262 [2024-11-28 13:10:10.170169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.262 [2024-11-28 13:10:10.170176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.262 [2024-11-28 13:10:10.170182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.262 [2024-11-28 13:10:10.170187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.262 [2024-11-28 13:10:10.181891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.262 [2024-11-28 13:10:10.182346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.262 [2024-11-28 13:10:10.182363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.262 [2024-11-28 13:10:10.182368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.262 [2024-11-28 13:10:10.182519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.262 [2024-11-28 13:10:10.182669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.262 [2024-11-28 13:10:10.182675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.262 [2024-11-28 13:10:10.182680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.262 [2024-11-28 13:10:10.182685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.262 [2024-11-28 13:10:10.194538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.262 [2024-11-28 13:10:10.194915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.262 [2024-11-28 13:10:10.194929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.262 [2024-11-28 13:10:10.194934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.262 [2024-11-28 13:10:10.195084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.262 [2024-11-28 13:10:10.195239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.262 [2024-11-28 13:10:10.195246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.262 [2024-11-28 13:10:10.195251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.262 [2024-11-28 13:10:10.195256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.262 [2024-11-28 13:10:10.207242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.262 [2024-11-28 13:10:10.207757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.262 [2024-11-28 13:10:10.207787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.262 [2024-11-28 13:10:10.207796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.262 [2024-11-28 13:10:10.207965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.262 [2024-11-28 13:10:10.208117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.262 [2024-11-28 13:10:10.208123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.262 [2024-11-28 13:10:10.208128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.262 [2024-11-28 13:10:10.208134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.262 [2024-11-28 13:10:10.219895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.262 [2024-11-28 13:10:10.220459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.262 [2024-11-28 13:10:10.220490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.262 [2024-11-28 13:10:10.220498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.262 [2024-11-28 13:10:10.220667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.262 [2024-11-28 13:10:10.220819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.262 [2024-11-28 13:10:10.220825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.262 [2024-11-28 13:10:10.220831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.262 [2024-11-28 13:10:10.220837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.262 [2024-11-28 13:10:10.232548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.262 [2024-11-28 13:10:10.233136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.262 [2024-11-28 13:10:10.233175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.262 [2024-11-28 13:10:10.233189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.262 [2024-11-28 13:10:10.233357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.262 [2024-11-28 13:10:10.233510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.262 [2024-11-28 13:10:10.233516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.262 [2024-11-28 13:10:10.233522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.262 [2024-11-28 13:10:10.233528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.262 [2024-11-28 13:10:10.245254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.262 [2024-11-28 13:10:10.245753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.262 [2024-11-28 13:10:10.245769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.262 [2024-11-28 13:10:10.245774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.262 [2024-11-28 13:10:10.245925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.262 [2024-11-28 13:10:10.246075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.262 [2024-11-28 13:10:10.246081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.262 [2024-11-28 13:10:10.246086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.262 [2024-11-28 13:10:10.246091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.262 [2024-11-28 13:10:10.257938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.262 [2024-11-28 13:10:10.258492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.262 [2024-11-28 13:10:10.258522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.262 [2024-11-28 13:10:10.258530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.262 [2024-11-28 13:10:10.258695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.262 [2024-11-28 13:10:10.258848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.262 [2024-11-28 13:10:10.258854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.262 [2024-11-28 13:10:10.258860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.262 [2024-11-28 13:10:10.258865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.262 [2024-11-28 13:10:10.270559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.262 [2024-11-28 13:10:10.271027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.262 [2024-11-28 13:10:10.271042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.262 [2024-11-28 13:10:10.271047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.262 [2024-11-28 13:10:10.271203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.262 [2024-11-28 13:10:10.271358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.262 [2024-11-28 13:10:10.271364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.262 [2024-11-28 13:10:10.271369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.262 [2024-11-28 13:10:10.271375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.262 [2024-11-28 13:10:10.283227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.262 [2024-11-28 13:10:10.283749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.262 [2024-11-28 13:10:10.283779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.262 [2024-11-28 13:10:10.283788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.262 [2024-11-28 13:10:10.283954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.262 [2024-11-28 13:10:10.284106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.262 [2024-11-28 13:10:10.284113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.262 [2024-11-28 13:10:10.284119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.262 [2024-11-28 13:10:10.284124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.262 [2024-11-28 13:10:10.295842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.262 [2024-11-28 13:10:10.296462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.262 [2024-11-28 13:10:10.296493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.262 [2024-11-28 13:10:10.296501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.263 [2024-11-28 13:10:10.296667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.263 [2024-11-28 13:10:10.296819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.263 [2024-11-28 13:10:10.296825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.263 [2024-11-28 13:10:10.296831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.263 [2024-11-28 13:10:10.296836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.263 [2024-11-28 13:10:10.308552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.263 [2024-11-28 13:10:10.309031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.263 [2024-11-28 13:10:10.309062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.263 [2024-11-28 13:10:10.309070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.263 [2024-11-28 13:10:10.309243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.263 [2024-11-28 13:10:10.309396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.263 [2024-11-28 13:10:10.309402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.263 [2024-11-28 13:10:10.309412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.263 [2024-11-28 13:10:10.309417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.263 [2024-11-28 13:10:10.321145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.263 [2024-11-28 13:10:10.321594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.263 [2024-11-28 13:10:10.321610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.263 [2024-11-28 13:10:10.321616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.263 [2024-11-28 13:10:10.321766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.263 [2024-11-28 13:10:10.321916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.263 [2024-11-28 13:10:10.321922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.263 [2024-11-28 13:10:10.321927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.263 [2024-11-28 13:10:10.321932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.263 [2024-11-28 13:10:10.333775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.263 [2024-11-28 13:10:10.334283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.263 [2024-11-28 13:10:10.334313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.263 [2024-11-28 13:10:10.334322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.263 [2024-11-28 13:10:10.334491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.263 [2024-11-28 13:10:10.334643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.263 [2024-11-28 13:10:10.334649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.263 [2024-11-28 13:10:10.334655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.263 [2024-11-28 13:10:10.334661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.263 [2024-11-28 13:10:10.346368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.263 [2024-11-28 13:10:10.346868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.263 [2024-11-28 13:10:10.346883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.263 [2024-11-28 13:10:10.346889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.263 [2024-11-28 13:10:10.347040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.263 [2024-11-28 13:10:10.347196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.263 [2024-11-28 13:10:10.347202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.263 [2024-11-28 13:10:10.347207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.263 [2024-11-28 13:10:10.347212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.263 [2024-11-28 13:10:10.359051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.263 [2024-11-28 13:10:10.359508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.263 [2024-11-28 13:10:10.359522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.263 [2024-11-28 13:10:10.359527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.263 [2024-11-28 13:10:10.359677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.263 [2024-11-28 13:10:10.359828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.263 [2024-11-28 13:10:10.359834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.263 [2024-11-28 13:10:10.359839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.263 [2024-11-28 13:10:10.359844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.263 [2024-11-28 13:10:10.371683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.263 [2024-11-28 13:10:10.372266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.263 [2024-11-28 13:10:10.372296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.263 [2024-11-28 13:10:10.372305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.263 [2024-11-28 13:10:10.372473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.263 [2024-11-28 13:10:10.372625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.263 [2024-11-28 13:10:10.372632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.263 [2024-11-28 13:10:10.372637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.263 [2024-11-28 13:10:10.372643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.263 [2024-11-28 13:10:10.384346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.263 [2024-11-28 13:10:10.384848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.263 [2024-11-28 13:10:10.384863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.263 [2024-11-28 13:10:10.384869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.263 [2024-11-28 13:10:10.385019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.263 [2024-11-28 13:10:10.385173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.263 [2024-11-28 13:10:10.385180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.263 [2024-11-28 13:10:10.385185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.263 [2024-11-28 13:10:10.385189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.526 [2024-11-28 13:10:10.397053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.526 [2024-11-28 13:10:10.397540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.526 [2024-11-28 13:10:10.397554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.526 [2024-11-28 13:10:10.397563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.526 [2024-11-28 13:10:10.397713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.526 [2024-11-28 13:10:10.397862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.526 [2024-11-28 13:10:10.397869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.526 [2024-11-28 13:10:10.397873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.526 [2024-11-28 13:10:10.397878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.526 [2024-11-28 13:10:10.409718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.526 [2024-11-28 13:10:10.410170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.526 [2024-11-28 13:10:10.410183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.526 [2024-11-28 13:10:10.410189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.526 [2024-11-28 13:10:10.410339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.526 [2024-11-28 13:10:10.410488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.526 [2024-11-28 13:10:10.410494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.526 [2024-11-28 13:10:10.410499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.526 [2024-11-28 13:10:10.410504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.526 [2024-11-28 13:10:10.422352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.526 [2024-11-28 13:10:10.422815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.526 [2024-11-28 13:10:10.422827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.526 [2024-11-28 13:10:10.422833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.526 [2024-11-28 13:10:10.422982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.526 [2024-11-28 13:10:10.423132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.526 [2024-11-28 13:10:10.423137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.526 [2024-11-28 13:10:10.423142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.526 [2024-11-28 13:10:10.423146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.526 [2024-11-28 13:10:10.434985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.526 [2024-11-28 13:10:10.435468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.526 [2024-11-28 13:10:10.435480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.526 [2024-11-28 13:10:10.435486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.526 [2024-11-28 13:10:10.435636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.526 [2024-11-28 13:10:10.435792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.526 [2024-11-28 13:10:10.435797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.526 [2024-11-28 13:10:10.435802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.526 [2024-11-28 13:10:10.435807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.526 [2024-11-28 13:10:10.447656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.526 [2024-11-28 13:10:10.448143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.526 [2024-11-28 13:10:10.448156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.526 [2024-11-28 13:10:10.448166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.526 [2024-11-28 13:10:10.448315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.526 [2024-11-28 13:10:10.448464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.526 [2024-11-28 13:10:10.448470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.526 [2024-11-28 13:10:10.448475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.526 [2024-11-28 13:10:10.448479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.526 [2024-11-28 13:10:10.460322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.526 [2024-11-28 13:10:10.460807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.526 [2024-11-28 13:10:10.460818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.526 [2024-11-28 13:10:10.460823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.526 [2024-11-28 13:10:10.460973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.526 [2024-11-28 13:10:10.461122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.526 [2024-11-28 13:10:10.461128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.526 [2024-11-28 13:10:10.461133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.526 [2024-11-28 13:10:10.461137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.526 [2024-11-28 13:10:10.472967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.526 [2024-11-28 13:10:10.473457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.526 [2024-11-28 13:10:10.473470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.526 [2024-11-28 13:10:10.473475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.526 [2024-11-28 13:10:10.473625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.526 [2024-11-28 13:10:10.473775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.526 [2024-11-28 13:10:10.473780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.526 [2024-11-28 13:10:10.473788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.526 [2024-11-28 13:10:10.473793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.526 [2024-11-28 13:10:10.485629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.526 [2024-11-28 13:10:10.486055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.526 [2024-11-28 13:10:10.486067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.526 [2024-11-28 13:10:10.486073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.526 [2024-11-28 13:10:10.486228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.526 [2024-11-28 13:10:10.486378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.526 [2024-11-28 13:10:10.486384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.526 [2024-11-28 13:10:10.486389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.527 [2024-11-28 13:10:10.486394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.527 [2024-11-28 13:10:10.498235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.527 [2024-11-28 13:10:10.498826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.527 [2024-11-28 13:10:10.498857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.527 [2024-11-28 13:10:10.498865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.527 [2024-11-28 13:10:10.499031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.527 [2024-11-28 13:10:10.499193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.527 [2024-11-28 13:10:10.499200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.527 [2024-11-28 13:10:10.499206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.527 [2024-11-28 13:10:10.499212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.527 [2024-11-28 13:10:10.510925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.527 [2024-11-28 13:10:10.511499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.527 [2024-11-28 13:10:10.511529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.527 [2024-11-28 13:10:10.511538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.527 [2024-11-28 13:10:10.511704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.527 [2024-11-28 13:10:10.511856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.527 [2024-11-28 13:10:10.511863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.527 [2024-11-28 13:10:10.511868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.527 [2024-11-28 13:10:10.511874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.527 [2024-11-28 13:10:10.523591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.527 [2024-11-28 13:10:10.524201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.527 [2024-11-28 13:10:10.524232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.527 [2024-11-28 13:10:10.524241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.527 [2024-11-28 13:10:10.524406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.527 [2024-11-28 13:10:10.524559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.527 [2024-11-28 13:10:10.524566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.527 [2024-11-28 13:10:10.524571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.527 [2024-11-28 13:10:10.524577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.527 [2024-11-28 13:10:10.536283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.527 [2024-11-28 13:10:10.536836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.527 [2024-11-28 13:10:10.536866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.527 [2024-11-28 13:10:10.536875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.527 [2024-11-28 13:10:10.537040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.527 [2024-11-28 13:10:10.537197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.527 [2024-11-28 13:10:10.537204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.527 [2024-11-28 13:10:10.537209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.527 [2024-11-28 13:10:10.537215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.527 [2024-11-28 13:10:10.548920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.527 [2024-11-28 13:10:10.549395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.527 [2024-11-28 13:10:10.549411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.527 [2024-11-28 13:10:10.549416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.527 [2024-11-28 13:10:10.549567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.527 [2024-11-28 13:10:10.549717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.527 [2024-11-28 13:10:10.549723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.527 [2024-11-28 13:10:10.549727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.527 [2024-11-28 13:10:10.549733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.527 [2024-11-28 13:10:10.561590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.527 [2024-11-28 13:10:10.561928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.527 [2024-11-28 13:10:10.561942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.527 [2024-11-28 13:10:10.561952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.527 [2024-11-28 13:10:10.562103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.527 [2024-11-28 13:10:10.562258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.527 [2024-11-28 13:10:10.562264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.527 [2024-11-28 13:10:10.562269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.527 [2024-11-28 13:10:10.562273] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.527 [2024-11-28 13:10:10.574252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.527 [2024-11-28 13:10:10.574693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.527 [2024-11-28 13:10:10.574706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.527 [2024-11-28 13:10:10.574711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.527 [2024-11-28 13:10:10.574860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.527 [2024-11-28 13:10:10.575010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.527 [2024-11-28 13:10:10.575015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.527 [2024-11-28 13:10:10.575020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.527 [2024-11-28 13:10:10.575025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.527 [2024-11-28 13:10:10.586869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.527 [2024-11-28 13:10:10.587427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.527 [2024-11-28 13:10:10.587457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.527 [2024-11-28 13:10:10.587466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.527 [2024-11-28 13:10:10.587631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.527 [2024-11-28 13:10:10.587784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.527 [2024-11-28 13:10:10.587790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.527 [2024-11-28 13:10:10.587795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.527 [2024-11-28 13:10:10.587801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.527 [2024-11-28 13:10:10.599501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.527 [2024-11-28 13:10:10.599999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.527 [2024-11-28 13:10:10.600014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.527 [2024-11-28 13:10:10.600020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.527 [2024-11-28 13:10:10.600175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.527 [2024-11-28 13:10:10.600330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.527 [2024-11-28 13:10:10.600336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.527 [2024-11-28 13:10:10.600341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.527 [2024-11-28 13:10:10.600346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.527 [2024-11-28 13:10:10.612272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.527 [2024-11-28 13:10:10.612612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.527 [2024-11-28 13:10:10.612625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.527 [2024-11-28 13:10:10.612630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.527 [2024-11-28 13:10:10.612780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.527 [2024-11-28 13:10:10.612930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.527 [2024-11-28 13:10:10.612936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.527 [2024-11-28 13:10:10.612941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.528 [2024-11-28 13:10:10.612946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.528 [2024-11-28 13:10:10.624938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:40.528 [2024-11-28 13:10:10.625409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.528 [2024-11-28 13:10:10.625422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:40.528 [2024-11-28 13:10:10.625427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:40.528 [2024-11-28 13:10:10.625577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:40.528 [2024-11-28 13:10:10.625727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:40.528 [2024-11-28 13:10:10.625733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:40.528 [2024-11-28 13:10:10.625737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:40.528 [2024-11-28 13:10:10.625742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:40.528 [2024-11-28 13:10:10.637587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.528 [2024-11-28 13:10:10.638072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.528 [2024-11-28 13:10:10.638084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.528 [2024-11-28 13:10:10.638090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.528 [2024-11-28 13:10:10.638243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.528 [2024-11-28 13:10:10.638394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.528 [2024-11-28 13:10:10.638400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.528 [2024-11-28 13:10:10.638408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.528 [2024-11-28 13:10:10.638413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.528 [2024-11-28 13:10:10.650311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.791 [2024-11-28 13:10:10.650806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.791 [2024-11-28 13:10:10.650819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.791 [2024-11-28 13:10:10.650825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.791 [2024-11-28 13:10:10.650974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.791 [2024-11-28 13:10:10.651124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.791 [2024-11-28 13:10:10.651130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.791 [2024-11-28 13:10:10.651135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.791 [2024-11-28 13:10:10.651139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.791 [2024-11-28 13:10:10.662986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.791 [2024-11-28 13:10:10.663456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.791 [2024-11-28 13:10:10.663469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.791 [2024-11-28 13:10:10.663474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.791 [2024-11-28 13:10:10.663624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.791 [2024-11-28 13:10:10.663773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.791 [2024-11-28 13:10:10.663779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.791 [2024-11-28 13:10:10.663784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.791 [2024-11-28 13:10:10.663788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.791 [2024-11-28 13:10:10.675634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.791 [2024-11-28 13:10:10.676117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.791 [2024-11-28 13:10:10.676130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.791 [2024-11-28 13:10:10.676135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.791 [2024-11-28 13:10:10.676289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.791 [2024-11-28 13:10:10.676440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.791 [2024-11-28 13:10:10.676445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.791 [2024-11-28 13:10:10.676450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.791 [2024-11-28 13:10:10.676455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.791 [2024-11-28 13:10:10.688301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.791 [2024-11-28 13:10:10.688765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.791 [2024-11-28 13:10:10.688777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.791 [2024-11-28 13:10:10.688782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.791 [2024-11-28 13:10:10.688932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.791 [2024-11-28 13:10:10.689081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.791 [2024-11-28 13:10:10.689087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.791 [2024-11-28 13:10:10.689092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.791 [2024-11-28 13:10:10.689096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.791 [2024-11-28 13:10:10.700937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.791 [2024-11-28 13:10:10.701448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.791 [2024-11-28 13:10:10.701461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.791 [2024-11-28 13:10:10.701466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.791 [2024-11-28 13:10:10.701616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.791 [2024-11-28 13:10:10.701765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.791 [2024-11-28 13:10:10.701771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.791 [2024-11-28 13:10:10.701776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.791 [2024-11-28 13:10:10.701780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.791 [2024-11-28 13:10:10.713612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.791 [2024-11-28 13:10:10.714103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.791 [2024-11-28 13:10:10.714116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.791 [2024-11-28 13:10:10.714121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.791 [2024-11-28 13:10:10.714275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.791 [2024-11-28 13:10:10.714426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.791 [2024-11-28 13:10:10.714432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.791 [2024-11-28 13:10:10.714436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.791 [2024-11-28 13:10:10.714441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.791 [2024-11-28 13:10:10.726291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.791 [2024-11-28 13:10:10.726761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.791 [2024-11-28 13:10:10.726773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.791 [2024-11-28 13:10:10.726781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.791 [2024-11-28 13:10:10.726931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.791 [2024-11-28 13:10:10.727080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.791 [2024-11-28 13:10:10.727086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.791 [2024-11-28 13:10:10.727090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.791 [2024-11-28 13:10:10.727095] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.791 [2024-11-28 13:10:10.738931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.791 [2024-11-28 13:10:10.739395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.791 [2024-11-28 13:10:10.739408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.791 [2024-11-28 13:10:10.739413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.791 [2024-11-28 13:10:10.739563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.791 [2024-11-28 13:10:10.739714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.791 [2024-11-28 13:10:10.739719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.791 [2024-11-28 13:10:10.739724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.791 [2024-11-28 13:10:10.739729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.791 [2024-11-28 13:10:10.751573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.791 [2024-11-28 13:10:10.752032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.791 [2024-11-28 13:10:10.752045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.791 [2024-11-28 13:10:10.752051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.791 [2024-11-28 13:10:10.752206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.791 [2024-11-28 13:10:10.752357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.791 [2024-11-28 13:10:10.752362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.791 [2024-11-28 13:10:10.752367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.791 [2024-11-28 13:10:10.752372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.791 [2024-11-28 13:10:10.764219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.791 [2024-11-28 13:10:10.764565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.792 [2024-11-28 13:10:10.764577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.792 [2024-11-28 13:10:10.764582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.792 [2024-11-28 13:10:10.764731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.792 [2024-11-28 13:10:10.764884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.792 [2024-11-28 13:10:10.764890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.792 [2024-11-28 13:10:10.764895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.792 [2024-11-28 13:10:10.764899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.792 [2024-11-28 13:10:10.776879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.792 [2024-11-28 13:10:10.777330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.792 [2024-11-28 13:10:10.777343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.792 [2024-11-28 13:10:10.777349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.792 [2024-11-28 13:10:10.777498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.792 [2024-11-28 13:10:10.777647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.792 [2024-11-28 13:10:10.777653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.792 [2024-11-28 13:10:10.777658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.792 [2024-11-28 13:10:10.777662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.792 [2024-11-28 13:10:10.789501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.792 [2024-11-28 13:10:10.789977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.792 [2024-11-28 13:10:10.789990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.792 [2024-11-28 13:10:10.789995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.792 [2024-11-28 13:10:10.790144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.792 [2024-11-28 13:10:10.790299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.792 [2024-11-28 13:10:10.790305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.792 [2024-11-28 13:10:10.790310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.792 [2024-11-28 13:10:10.790314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.792 [2024-11-28 13:10:10.802152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.792 [2024-11-28 13:10:10.802614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.792 [2024-11-28 13:10:10.802626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.792 [2024-11-28 13:10:10.802631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.792 [2024-11-28 13:10:10.802780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.792 [2024-11-28 13:10:10.802930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.792 [2024-11-28 13:10:10.802935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.792 [2024-11-28 13:10:10.802943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.792 [2024-11-28 13:10:10.802949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.792 [2024-11-28 13:10:10.814791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.792 [2024-11-28 13:10:10.815268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.792 [2024-11-28 13:10:10.815281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.792 [2024-11-28 13:10:10.815286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.792 [2024-11-28 13:10:10.815435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.792 [2024-11-28 13:10:10.815585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.792 [2024-11-28 13:10:10.815591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.792 [2024-11-28 13:10:10.815596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.792 [2024-11-28 13:10:10.815601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.792 [2024-11-28 13:10:10.827450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.792 [2024-11-28 13:10:10.828032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.792 [2024-11-28 13:10:10.828062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.792 [2024-11-28 13:10:10.828071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.792 [2024-11-28 13:10:10.828243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.792 [2024-11-28 13:10:10.828397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.792 [2024-11-28 13:10:10.828403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.792 [2024-11-28 13:10:10.828409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.792 [2024-11-28 13:10:10.828414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.792 5660.20 IOPS, 22.11 MiB/s [2024-11-28T12:10:10.919Z] [2024-11-28 13:10:10.840123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.792 [2024-11-28 13:10:10.840680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.792 [2024-11-28 13:10:10.840710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.792 [2024-11-28 13:10:10.840719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.792 [2024-11-28 13:10:10.840884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.792 [2024-11-28 13:10:10.841037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.792 [2024-11-28 13:10:10.841043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.792 [2024-11-28 13:10:10.841049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.792 [2024-11-28 13:10:10.841055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.792 [2024-11-28 13:10:10.852778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.792 [2024-11-28 13:10:10.853230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.792 [2024-11-28 13:10:10.853246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.792 [2024-11-28 13:10:10.853251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.792 [2024-11-28 13:10:10.853402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.792 [2024-11-28 13:10:10.853552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.792 [2024-11-28 13:10:10.853558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.792 [2024-11-28 13:10:10.853563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.792 [2024-11-28 13:10:10.853568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.792 [2024-11-28 13:10:10.865410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.792 [2024-11-28 13:10:10.865858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.792 [2024-11-28 13:10:10.865872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.792 [2024-11-28 13:10:10.865877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.792 [2024-11-28 13:10:10.866027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.792 [2024-11-28 13:10:10.866181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.792 [2024-11-28 13:10:10.866187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.792 [2024-11-28 13:10:10.866192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.792 [2024-11-28 13:10:10.866197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.792 [2024-11-28 13:10:10.878033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.792 [2024-11-28 13:10:10.878502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.792 [2024-11-28 13:10:10.878516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.792 [2024-11-28 13:10:10.878521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.792 [2024-11-28 13:10:10.878670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.792 [2024-11-28 13:10:10.878820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.792 [2024-11-28 13:10:10.878826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.792 [2024-11-28 13:10:10.878831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.792 [2024-11-28 13:10:10.878835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.792 [2024-11-28 13:10:10.890698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.793 [2024-11-28 13:10:10.891155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.793 [2024-11-28 13:10:10.891173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.793 [2024-11-28 13:10:10.891182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.793 [2024-11-28 13:10:10.891331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.793 [2024-11-28 13:10:10.891482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.793 [2024-11-28 13:10:10.891488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.793 [2024-11-28 13:10:10.891492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.793 [2024-11-28 13:10:10.891497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:40.793 [2024-11-28 13:10:10.903336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:40.793 [2024-11-28 13:10:10.903781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.793 [2024-11-28 13:10:10.903794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:40.793 [2024-11-28 13:10:10.903799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:40.793 [2024-11-28 13:10:10.903949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:40.793 [2024-11-28 13:10:10.904099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:40.793 [2024-11-28 13:10:10.904105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:40.793 [2024-11-28 13:10:10.904110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:40.793 [2024-11-28 13:10:10.904114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.056 [2024-11-28 13:10:10.915988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.056 [2024-11-28 13:10:10.916480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.056 [2024-11-28 13:10:10.916493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.056 [2024-11-28 13:10:10.916498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.056 [2024-11-28 13:10:10.916648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.056 [2024-11-28 13:10:10.916797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.056 [2024-11-28 13:10:10.916803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.056 [2024-11-28 13:10:10.916808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.056 [2024-11-28 13:10:10.916812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.056 [2024-11-28 13:10:10.928654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.056 [2024-11-28 13:10:10.929257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.056 [2024-11-28 13:10:10.929287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.056 [2024-11-28 13:10:10.929295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.056 [2024-11-28 13:10:10.929461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.056 [2024-11-28 13:10:10.929618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.056 [2024-11-28 13:10:10.929624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.056 [2024-11-28 13:10:10.929630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.056 [2024-11-28 13:10:10.929635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.056 [2024-11-28 13:10:10.941358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.056 [2024-11-28 13:10:10.941848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.056 [2024-11-28 13:10:10.941877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.056 [2024-11-28 13:10:10.941886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.056 [2024-11-28 13:10:10.942052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.056 [2024-11-28 13:10:10.942213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.056 [2024-11-28 13:10:10.942221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.056 [2024-11-28 13:10:10.942227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.056 [2024-11-28 13:10:10.942232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.056 [2024-11-28 13:10:10.954065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.056 [2024-11-28 13:10:10.954648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.056 [2024-11-28 13:10:10.954678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.056 [2024-11-28 13:10:10.954687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.056 [2024-11-28 13:10:10.954852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.056 [2024-11-28 13:10:10.955005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.056 [2024-11-28 13:10:10.955011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.057 [2024-11-28 13:10:10.955016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.057 [2024-11-28 13:10:10.955022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.057 [2024-11-28 13:10:10.966718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.057 [2024-11-28 13:10:10.967296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.057 [2024-11-28 13:10:10.967327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.057 [2024-11-28 13:10:10.967335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.057 [2024-11-28 13:10:10.967501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.057 [2024-11-28 13:10:10.967654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.057 [2024-11-28 13:10:10.967660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.057 [2024-11-28 13:10:10.967669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.057 [2024-11-28 13:10:10.967674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.057 [2024-11-28 13:10:10.979382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.057 [2024-11-28 13:10:10.979925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.057 [2024-11-28 13:10:10.979955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.057 [2024-11-28 13:10:10.979963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.057 [2024-11-28 13:10:10.980128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.057 [2024-11-28 13:10:10.980290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.057 [2024-11-28 13:10:10.980297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.057 [2024-11-28 13:10:10.980303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.057 [2024-11-28 13:10:10.980308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.057 [2024-11-28 13:10:10.991998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.057 [2024-11-28 13:10:10.992581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.057 [2024-11-28 13:10:10.992611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.057 [2024-11-28 13:10:10.992619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.057 [2024-11-28 13:10:10.992787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.057 [2024-11-28 13:10:10.992940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.057 [2024-11-28 13:10:10.992946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.057 [2024-11-28 13:10:10.992952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.057 [2024-11-28 13:10:10.992958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.057 [2024-11-28 13:10:11.004674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.057 [2024-11-28 13:10:11.005138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.057 [2024-11-28 13:10:11.005153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.057 [2024-11-28 13:10:11.005164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.057 [2024-11-28 13:10:11.005315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.057 [2024-11-28 13:10:11.005465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.057 [2024-11-28 13:10:11.005470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.057 [2024-11-28 13:10:11.005475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.057 [2024-11-28 13:10:11.005480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.057 [2024-11-28 13:10:11.017319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.057 [2024-11-28 13:10:11.017805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.057 [2024-11-28 13:10:11.017818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.057 [2024-11-28 13:10:11.017823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.057 [2024-11-28 13:10:11.017973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.057 [2024-11-28 13:10:11.018123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.057 [2024-11-28 13:10:11.018128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.057 [2024-11-28 13:10:11.018133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.057 [2024-11-28 13:10:11.018138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.057 [2024-11-28 13:10:11.029973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.057 [2024-11-28 13:10:11.030542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.057 [2024-11-28 13:10:11.030573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.057 [2024-11-28 13:10:11.030582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.057 [2024-11-28 13:10:11.030747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.057 [2024-11-28 13:10:11.030900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.057 [2024-11-28 13:10:11.030906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.057 [2024-11-28 13:10:11.030912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.057 [2024-11-28 13:10:11.030917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.057 [2024-11-28 13:10:11.042641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.057 [2024-11-28 13:10:11.043137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.057 [2024-11-28 13:10:11.043152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.057 [2024-11-28 13:10:11.043163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.057 [2024-11-28 13:10:11.043314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.057 [2024-11-28 13:10:11.043464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.057 [2024-11-28 13:10:11.043470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.057 [2024-11-28 13:10:11.043475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.057 [2024-11-28 13:10:11.043480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.057 [2024-11-28 13:10:11.055312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.057 [2024-11-28 13:10:11.055751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.057 [2024-11-28 13:10:11.055781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.058 [2024-11-28 13:10:11.055796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.058 [2024-11-28 13:10:11.055962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.058 [2024-11-28 13:10:11.056114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.058 [2024-11-28 13:10:11.056120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.058 [2024-11-28 13:10:11.056126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.058 [2024-11-28 13:10:11.056132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.058 [2024-11-28 13:10:11.067973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.058 [2024-11-28 13:10:11.068559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.058 [2024-11-28 13:10:11.068589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.058 [2024-11-28 13:10:11.068598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.058 [2024-11-28 13:10:11.068763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.058 [2024-11-28 13:10:11.068916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.058 [2024-11-28 13:10:11.068922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.058 [2024-11-28 13:10:11.068928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.058 [2024-11-28 13:10:11.068933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.058 [2024-11-28 13:10:11.080652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.058 [2024-11-28 13:10:11.081145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.058 [2024-11-28 13:10:11.081164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.058 [2024-11-28 13:10:11.081170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.058 [2024-11-28 13:10:11.081321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.058 [2024-11-28 13:10:11.081470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.058 [2024-11-28 13:10:11.081476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.058 [2024-11-28 13:10:11.081481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.058 [2024-11-28 13:10:11.081486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.058 [2024-11-28 13:10:11.093322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.058 [2024-11-28 13:10:11.093858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.058 [2024-11-28 13:10:11.093888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.058 [2024-11-28 13:10:11.093897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.058 [2024-11-28 13:10:11.094062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.058 [2024-11-28 13:10:11.094227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.058 [2024-11-28 13:10:11.094234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.058 [2024-11-28 13:10:11.094240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.058 [2024-11-28 13:10:11.094245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.058 [2024-11-28 13:10:11.105935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.058 [2024-11-28 13:10:11.106492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.058 [2024-11-28 13:10:11.106522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.058 [2024-11-28 13:10:11.106530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.058 [2024-11-28 13:10:11.106695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.058 [2024-11-28 13:10:11.106848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.058 [2024-11-28 13:10:11.106854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.058 [2024-11-28 13:10:11.106860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.058 [2024-11-28 13:10:11.106865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.058 [2024-11-28 13:10:11.118575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.058 [2024-11-28 13:10:11.119105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.058 [2024-11-28 13:10:11.119135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.058 [2024-11-28 13:10:11.119144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.058 [2024-11-28 13:10:11.119321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.058 [2024-11-28 13:10:11.119475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.058 [2024-11-28 13:10:11.119481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.058 [2024-11-28 13:10:11.119487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.058 [2024-11-28 13:10:11.119493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.058 [2024-11-28 13:10:11.131187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.058 [2024-11-28 13:10:11.131759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.058 [2024-11-28 13:10:11.131790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.058 [2024-11-28 13:10:11.131798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.058 [2024-11-28 13:10:11.131964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.058 [2024-11-28 13:10:11.132117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.058 [2024-11-28 13:10:11.132123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.058 [2024-11-28 13:10:11.132132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.058 [2024-11-28 13:10:11.132137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.058 [2024-11-28 13:10:11.143855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.058 [2024-11-28 13:10:11.144516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.058 [2024-11-28 13:10:11.144546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.058 [2024-11-28 13:10:11.144555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.058 [2024-11-28 13:10:11.144723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.058 [2024-11-28 13:10:11.144876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.058 [2024-11-28 13:10:11.144882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.058 [2024-11-28 13:10:11.144888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.058 [2024-11-28 13:10:11.144893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.058 [2024-11-28 13:10:11.156449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.058 [2024-11-28 13:10:11.156941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.059 [2024-11-28 13:10:11.156956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.059 [2024-11-28 13:10:11.156961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.059 [2024-11-28 13:10:11.157111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.059 [2024-11-28 13:10:11.157268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.059 [2024-11-28 13:10:11.157274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.059 [2024-11-28 13:10:11.157279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.059 [2024-11-28 13:10:11.157284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.059 [2024-11-28 13:10:11.169107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.059 [2024-11-28 13:10:11.169674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.059 [2024-11-28 13:10:11.169704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.059 [2024-11-28 13:10:11.169712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.059 [2024-11-28 13:10:11.169878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.059 [2024-11-28 13:10:11.170030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.059 [2024-11-28 13:10:11.170036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.059 [2024-11-28 13:10:11.170042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.059 [2024-11-28 13:10:11.170048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.321 [2024-11-28 13:10:11.181761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.321 [2024-11-28 13:10:11.182352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.321 [2024-11-28 13:10:11.182383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.322 [2024-11-28 13:10:11.182392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.322 [2024-11-28 13:10:11.182557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.322 [2024-11-28 13:10:11.182709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.322 [2024-11-28 13:10:11.182716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.322 [2024-11-28 13:10:11.182721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.322 [2024-11-28 13:10:11.182727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.322 [2024-11-28 13:10:11.194428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.322 [2024-11-28 13:10:11.194885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.322 [2024-11-28 13:10:11.194900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.322 [2024-11-28 13:10:11.194906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.322 [2024-11-28 13:10:11.195056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.322 [2024-11-28 13:10:11.195212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.322 [2024-11-28 13:10:11.195218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.322 [2024-11-28 13:10:11.195223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.322 [2024-11-28 13:10:11.195229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
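The repeated `connect() failed, errno = 111` entries above are `ECONNREFUSED` (the value 111 on Linux): the target that was listening on 10.0.0.2:4420 is gone (it is killed and restarted below), so every TCP reconnect attempt in the poll loop is actively refused until a listener is back on the port. A minimal sketch of the same failure mode, assuming a loopback port with no listener (any closed port behaves the same):

```python
import errno
import socket

def try_connect(host: str, port: int) -> int:
    """Attempt one TCP connect; return 0 on success or the errno on
    failure -- the value SPDK's posix_sock_create() logs above."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1.0)
    try:
        s.connect((host, port))
        return 0
    except OSError as e:
        return e.errno or -1  # timeouts carry no errno; map them to -1
    finally:
        s.close()

# errno 111 is ECONNREFUSED on Linux, i.e. nothing is listening on the
# target port; loopback port 9 is assumed to have no listener here.
print(errno.ECONNREFUSED, try_connect("127.0.0.1", 9))
```

A connect to a port that nothing listens on fails immediately with ECONNREFUSED, which is why the retry loop above spins so quickly: each attempt is rejected by the kernel rather than timing out.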
00:39:41.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3672683 Killed "${NVMF_APP[@]}" "$@"
00:39:41.324 13:10:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:39:41.324 13:10:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:39:41.324 13:10:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:39:41.324 13:10:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:39:41.324 13:10:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:39:41.587 [2024-11-28 13:10:11.447542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:41.587 [2024-11-28 13:10:11.448133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:41.587 [2024-11-28 13:10:11.448169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:41.587 [2024-11-28 13:10:11.448178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:41.587 [2024-11-28 13:10:11.448346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:41.587 [2024-11-28 13:10:11.448500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:41.587 [2024-11-28 13:10:11.448506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:41.587 [2024-11-28 13:10:11.448511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:41.587 [2024-11-28 13:10:11.448517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:41.587 13:10:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3674388
00:39:41.587 13:10:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3674388
00:39:41.587 13:10:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:39:41.587 13:10:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3674388 ']'
00:39:41.587 13:10:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:39:41.587 13:10:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:39:41.587 13:10:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:39:41.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:39:41.587 13:10:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:39:41.587 13:10:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:39:41.587 [2024-11-28 13:10:11.460217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:41.587 [2024-11-28 13:10:11.460744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:41.587 [2024-11-28 13:10:11.460760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:41.587 [2024-11-28 13:10:11.460765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:41.587 [2024-11-28 13:10:11.460915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:41.587 [2024-11-28 13:10:11.461065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:41.587 [2024-11-28 13:10:11.461071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:41.587 [2024-11-28 13:10:11.461076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:41.587 [2024-11-28 13:10:11.461081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
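The `waitforlisten 3674388` step ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...") blocks until the freshly restarted `nvmf_tgt` accepts RPC connections on its UNIX socket. A rough sketch of that kind of readiness wait; the function name, timeout, and polling interval here are illustrative, not SPDK's actual `waitforlisten` helper:

```python
import socket
import time

def wait_for_listen(sock_path: str, timeout: float = 10.0,
                    interval: float = 0.2) -> bool:
    """Poll a UNIX domain socket until something accepts connections
    on it, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True           # a listener picked up: the app is ready
        except OSError:
            time.sleep(interval)  # not listening yet (ENOENT/ECONNREFUSED)
        finally:
            s.close()
    return False
```

In practice such a wait is paired with a liveness check on the daemon's PID (as the `max_retries=100` loop in the test scripts suggests), so a crashed process fails fast instead of consuming the whole timeout.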
00:39:41.587 [2024-11-28 13:10:11.472905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:41.587 [2024-11-28 13:10:11.473464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:41.587 [2024-11-28 13:10:11.473494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:41.587 [2024-11-28 13:10:11.473503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:41.587 [2024-11-28 13:10:11.473668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:41.587 [2024-11-28 13:10:11.473821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:41.587 [2024-11-28 13:10:11.473827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:41.587 [2024-11-28 13:10:11.473833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:41.587 [2024-11-28 13:10:11.473838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:41.587 [2024-11-28 13:10:11.485568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:41.587 [2024-11-28 13:10:11.486132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:41.587 [2024-11-28 13:10:11.486173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:41.587 [2024-11-28 13:10:11.486183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:41.587 [2024-11-28 13:10:11.486348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:41.587 [2024-11-28 13:10:11.486501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:41.587 [2024-11-28 13:10:11.486511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:41.587 [2024-11-28 13:10:11.486517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:41.587 [2024-11-28 13:10:11.486522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:41.587 [2024-11-28 13:10:11.498212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:41.587 [2024-11-28 13:10:11.498900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:41.587 [2024-11-28 13:10:11.498930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:41.587 [2024-11-28 13:10:11.498939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:41.587 [2024-11-28 13:10:11.499108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:41.587 [2024-11-28 13:10:11.499268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:41.587 [2024-11-28 13:10:11.499275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:41.587 [2024-11-28 13:10:11.499282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:41.587 [2024-11-28 13:10:11.499288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:41.587 [2024-11-28 13:10:11.502633] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization...
00:39:41.587 [2024-11-28 13:10:11.502679] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:39:41.587 [2024-11-28 13:10:11.510835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:41.587 [2024-11-28 13:10:11.511429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:41.587 [2024-11-28 13:10:11.511460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:41.587 [2024-11-28 13:10:11.511469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:41.588 [2024-11-28 13:10:11.511637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:41.588 [2024-11-28 13:10:11.511790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:41.588 [2024-11-28 13:10:11.511796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:41.588 [2024-11-28 13:10:11.511801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:41.588 [2024-11-28 13:10:11.511807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:41.588 [2024-11-28 13:10:11.523533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.588 [2024-11-28 13:10:11.524037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.588 [2024-11-28 13:10:11.524052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.588 [2024-11-28 13:10:11.524058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.588 [2024-11-28 13:10:11.524213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.588 [2024-11-28 13:10:11.524364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.588 [2024-11-28 13:10:11.524374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.588 [2024-11-28 13:10:11.524379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.588 [2024-11-28 13:10:11.524385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.588 [2024-11-28 13:10:11.536216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.588 [2024-11-28 13:10:11.536771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.588 [2024-11-28 13:10:11.536801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.588 [2024-11-28 13:10:11.536810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.588 [2024-11-28 13:10:11.536976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.588 [2024-11-28 13:10:11.537128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.588 [2024-11-28 13:10:11.537134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.588 [2024-11-28 13:10:11.537140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.588 [2024-11-28 13:10:11.537146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.588 [2024-11-28 13:10:11.548850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.588 [2024-11-28 13:10:11.549465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.588 [2024-11-28 13:10:11.549495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.588 [2024-11-28 13:10:11.549504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.588 [2024-11-28 13:10:11.549670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.588 [2024-11-28 13:10:11.549822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.588 [2024-11-28 13:10:11.549828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.588 [2024-11-28 13:10:11.549834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.588 [2024-11-28 13:10:11.549839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.588 [2024-11-28 13:10:11.561530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.588 [2024-11-28 13:10:11.561992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.588 [2024-11-28 13:10:11.562022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.588 [2024-11-28 13:10:11.562031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.588 [2024-11-28 13:10:11.562203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.588 [2024-11-28 13:10:11.562357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.588 [2024-11-28 13:10:11.562363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.588 [2024-11-28 13:10:11.562369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.588 [2024-11-28 13:10:11.562378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.588 [2024-11-28 13:10:11.574214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.588 [2024-11-28 13:10:11.574786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.588 [2024-11-28 13:10:11.574816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.588 [2024-11-28 13:10:11.574825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.588 [2024-11-28 13:10:11.574990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.588 [2024-11-28 13:10:11.575143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.588 [2024-11-28 13:10:11.575149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.588 [2024-11-28 13:10:11.575154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.588 [2024-11-28 13:10:11.575169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.588 [2024-11-28 13:10:11.586864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.588 [2024-11-28 13:10:11.587448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.588 [2024-11-28 13:10:11.587478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.588 [2024-11-28 13:10:11.587487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.588 [2024-11-28 13:10:11.587652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.588 [2024-11-28 13:10:11.587805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.588 [2024-11-28 13:10:11.587811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.588 [2024-11-28 13:10:11.587817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.588 [2024-11-28 13:10:11.587822] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.588 [2024-11-28 13:10:11.599522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.588 [2024-11-28 13:10:11.599978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.588 [2024-11-28 13:10:11.600008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.588 [2024-11-28 13:10:11.600017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.588 [2024-11-28 13:10:11.600192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.588 [2024-11-28 13:10:11.600346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.588 [2024-11-28 13:10:11.600352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.588 [2024-11-28 13:10:11.600358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.588 [2024-11-28 13:10:11.600363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.588 [2024-11-28 13:10:11.612189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.588 [2024-11-28 13:10:11.612743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.588 [2024-11-28 13:10:11.612773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.588 [2024-11-28 13:10:11.612782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.588 [2024-11-28 13:10:11.612948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.588 [2024-11-28 13:10:11.613101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.588 [2024-11-28 13:10:11.613107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.588 [2024-11-28 13:10:11.613113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.588 [2024-11-28 13:10:11.613119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.588 [2024-11-28 13:10:11.624824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.588 [2024-11-28 13:10:11.625457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.588 [2024-11-28 13:10:11.625487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.588 [2024-11-28 13:10:11.625496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.588 [2024-11-28 13:10:11.625661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.588 [2024-11-28 13:10:11.625814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.588 [2024-11-28 13:10:11.625820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.588 [2024-11-28 13:10:11.625825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.588 [2024-11-28 13:10:11.625830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.588 [2024-11-28 13:10:11.637536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.588 [2024-11-28 13:10:11.638111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.588 [2024-11-28 13:10:11.638142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.588 [2024-11-28 13:10:11.638150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.589 [2024-11-28 13:10:11.638326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.589 [2024-11-28 13:10:11.638479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.589 [2024-11-28 13:10:11.638485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.589 [2024-11-28 13:10:11.638491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.589 [2024-11-28 13:10:11.638497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:41.589 [2024-11-28 13:10:11.642377] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:39:41.589 [2024-11-28 13:10:11.650132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.589 [2024-11-28 13:10:11.650714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.589 [2024-11-28 13:10:11.650747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.589 [2024-11-28 13:10:11.650756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.589 [2024-11-28 13:10:11.650922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.589 [2024-11-28 13:10:11.651075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.589 [2024-11-28 13:10:11.651081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.589 [2024-11-28 13:10:11.651087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.589 [2024-11-28 13:10:11.651092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.589 [2024-11-28 13:10:11.662788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.589 [2024-11-28 13:10:11.663174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.589 [2024-11-28 13:10:11.663190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.589 [2024-11-28 13:10:11.663197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.589 [2024-11-28 13:10:11.663347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.589 [2024-11-28 13:10:11.663498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.589 [2024-11-28 13:10:11.663504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.589 [2024-11-28 13:10:11.663509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.589 [2024-11-28 13:10:11.663515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.589 [2024-11-28 13:10:11.675571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.589 [2024-11-28 13:10:11.676018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.589 [2024-11-28 13:10:11.676049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.589 [2024-11-28 13:10:11.676057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.589 [2024-11-28 13:10:11.676229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.589 [2024-11-28 13:10:11.676382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.589 [2024-11-28 13:10:11.676389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.589 [2024-11-28 13:10:11.676394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.589 [2024-11-28 13:10:11.676400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.589 [2024-11-28 13:10:11.688235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.589 [2024-11-28 13:10:11.688702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.589 [2024-11-28 13:10:11.688719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.589 [2024-11-28 13:10:11.688724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.589 [2024-11-28 13:10:11.688875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.589 [2024-11-28 13:10:11.689032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.589 [2024-11-28 13:10:11.689038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.589 [2024-11-28 13:10:11.689043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.589 [2024-11-28 13:10:11.689048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.589 [2024-11-28 13:10:11.697834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:41.589 [2024-11-28 13:10:11.700898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.589 [2024-11-28 13:10:11.701470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.589 [2024-11-28 13:10:11.701501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.589 [2024-11-28 13:10:11.701509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.589 [2024-11-28 13:10:11.701676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.589 [2024-11-28 13:10:11.701829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.589 [2024-11-28 13:10:11.701836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.589 [2024-11-28 13:10:11.701841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.589 [2024-11-28 13:10:11.701847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:41.853 [2024-11-28 13:10:11.713453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:41.853 [2024-11-28 13:10:11.713477] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:39:41.853 [2024-11-28 13:10:11.713484] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:41.853 [2024-11-28 13:10:11.713490] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:41.853 [2024-11-28 13:10:11.713494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:41.853 [2024-11-28 13:10:11.713555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.853 [2024-11-28 13:10:11.714206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.853 [2024-11-28 13:10:11.714238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.853 [2024-11-28 13:10:11.714247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.853 [2024-11-28 13:10:11.714417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.853 [2024-11-28 13:10:11.714570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.853 [2024-11-28 13:10:11.714576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.853 [2024-11-28 13:10:11.714582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.853 [2024-11-28 13:10:11.714588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.853 [2024-11-28 13:10:11.714579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:41.853 [2024-11-28 13:10:11.714731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:41.853 [2024-11-28 13:10:11.714733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:41.853 [2024-11-28 13:10:11.726167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.853 [2024-11-28 13:10:11.726784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.853 [2024-11-28 13:10:11.726816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.853 [2024-11-28 13:10:11.726825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.853 [2024-11-28 13:10:11.726992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.853 [2024-11-28 13:10:11.727145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.853 [2024-11-28 13:10:11.727151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.853 [2024-11-28 13:10:11.727157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.853 [2024-11-28 13:10:11.727172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.853 [2024-11-28 13:10:11.738876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.853 [2024-11-28 13:10:11.739398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.853 [2024-11-28 13:10:11.739415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.853 [2024-11-28 13:10:11.739422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.853 [2024-11-28 13:10:11.739572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.853 [2024-11-28 13:10:11.739723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.853 [2024-11-28 13:10:11.739729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.853 [2024-11-28 13:10:11.739735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.853 [2024-11-28 13:10:11.739740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.853 [2024-11-28 13:10:11.751583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.853 [2024-11-28 13:10:11.752082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.853 [2024-11-28 13:10:11.752113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.853 [2024-11-28 13:10:11.752122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.853 [2024-11-28 13:10:11.752296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.853 [2024-11-28 13:10:11.752450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.853 [2024-11-28 13:10:11.752456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.853 [2024-11-28 13:10:11.752461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.853 [2024-11-28 13:10:11.752467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.853 [2024-11-28 13:10:11.764166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:41.853 [2024-11-28 13:10:11.764685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.853 [2024-11-28 13:10:11.764704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:41.853 [2024-11-28 13:10:11.764710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:41.853 [2024-11-28 13:10:11.764861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:41.853 [2024-11-28 13:10:11.765012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:41.853 [2024-11-28 13:10:11.765018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:41.853 [2024-11-28 13:10:11.765023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:41.853 [2024-11-28 13:10:11.765028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:41.853 [2024-11-28 13:10:11.776856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:41.853 [2024-11-28 13:10:11.777360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:41.853 [2024-11-28 13:10:11.777390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:41.853 [2024-11-28 13:10:11.777399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:41.853 [2024-11-28 13:10:11.777564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:41.853 [2024-11-28 13:10:11.777717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:41.853 [2024-11-28 13:10:11.777723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:41.853 [2024-11-28 13:10:11.777728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:41.853 [2024-11-28 13:10:11.777735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:41.853 [2024-11-28 13:10:11.789426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:41.853 [2024-11-28 13:10:11.790005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:41.853 [2024-11-28 13:10:11.790035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:41.853 [2024-11-28 13:10:11.790044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:41.853 [2024-11-28 13:10:11.790216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:41.853 [2024-11-28 13:10:11.790370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:41.853 [2024-11-28 13:10:11.790376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:41.853 [2024-11-28 13:10:11.790381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:41.853 [2024-11-28 13:10:11.790387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:41.853 [2024-11-28 13:10:11.802087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:41.853 [2024-11-28 13:10:11.802677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:41.853 [2024-11-28 13:10:11.802707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:41.853 [2024-11-28 13:10:11.802716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:41.853 [2024-11-28 13:10:11.802886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:41.853 [2024-11-28 13:10:11.803039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:41.853 [2024-11-28 13:10:11.803045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:41.853 [2024-11-28 13:10:11.803050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:41.854 [2024-11-28 13:10:11.803056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:41.854 [2024-11-28 13:10:11.814744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:41.854 [2024-11-28 13:10:11.815382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:41.854 [2024-11-28 13:10:11.815413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:41.854 [2024-11-28 13:10:11.815422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:41.854 [2024-11-28 13:10:11.815587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:41.854 [2024-11-28 13:10:11.815740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:41.854 [2024-11-28 13:10:11.815746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:41.854 [2024-11-28 13:10:11.815752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:41.854 [2024-11-28 13:10:11.815757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:41.854 [2024-11-28 13:10:11.827321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:41.854 [2024-11-28 13:10:11.827922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:41.854 [2024-11-28 13:10:11.827952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:41.854 [2024-11-28 13:10:11.827961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:41.854 [2024-11-28 13:10:11.828127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:41.854 [2024-11-28 13:10:11.828286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:41.854 [2024-11-28 13:10:11.828293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:41.854 [2024-11-28 13:10:11.828299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:41.854 [2024-11-28 13:10:11.828305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:41.854 4716.83 IOPS, 18.43 MiB/s [2024-11-28T12:10:11.981Z] [2024-11-28 13:10:11.839992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:41.854 [2024-11-28 13:10:11.840576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:41.854 [2024-11-28 13:10:11.840607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:41.854 [2024-11-28 13:10:11.840616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:41.854 [2024-11-28 13:10:11.840781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:41.854 [2024-11-28 13:10:11.840934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:41.854 [2024-11-28 13:10:11.840943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:41.854 [2024-11-28 13:10:11.840949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:41.854 [2024-11-28 13:10:11.840954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:41.854 [2024-11-28 13:10:11.852655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:41.854 [2024-11-28 13:10:11.853129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:41.854 [2024-11-28 13:10:11.853145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:41.854 [2024-11-28 13:10:11.853150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:41.854 [2024-11-28 13:10:11.853306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:41.854 [2024-11-28 13:10:11.853457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:41.854 [2024-11-28 13:10:11.853462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:41.854 [2024-11-28 13:10:11.853467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:41.854 [2024-11-28 13:10:11.853472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:41.854 [2024-11-28 13:10:11.865289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:41.854 [2024-11-28 13:10:11.865782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:41.854 [2024-11-28 13:10:11.865795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:41.854 [2024-11-28 13:10:11.865800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:41.854 [2024-11-28 13:10:11.865950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:41.854 [2024-11-28 13:10:11.866100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:41.854 [2024-11-28 13:10:11.866106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:41.854 [2024-11-28 13:10:11.866111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:41.854 [2024-11-28 13:10:11.866116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:41.854 [2024-11-28 13:10:11.877930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:41.854 [2024-11-28 13:10:11.878424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:41.854 [2024-11-28 13:10:11.878437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:41.854 [2024-11-28 13:10:11.878442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:41.854 [2024-11-28 13:10:11.878592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:41.854 [2024-11-28 13:10:11.878742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:41.854 [2024-11-28 13:10:11.878747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:41.854 [2024-11-28 13:10:11.878752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:41.854 [2024-11-28 13:10:11.878757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:41.854 [2024-11-28 13:10:11.890613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:41.854 [2024-11-28 13:10:11.891118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:41.854 [2024-11-28 13:10:11.891132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:41.854 [2024-11-28 13:10:11.891138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:41.854 [2024-11-28 13:10:11.891293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:41.854 [2024-11-28 13:10:11.891444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:41.854 [2024-11-28 13:10:11.891450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:41.854 [2024-11-28 13:10:11.891455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:41.854 [2024-11-28 13:10:11.891459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:41.854 [2024-11-28 13:10:11.903278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:41.854 [2024-11-28 13:10:11.903736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:41.854 [2024-11-28 13:10:11.903765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:41.854 [2024-11-28 13:10:11.903774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:41.854 [2024-11-28 13:10:11.903940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:41.854 [2024-11-28 13:10:11.904093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:41.854 [2024-11-28 13:10:11.904099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:41.854 [2024-11-28 13:10:11.904105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:41.854 [2024-11-28 13:10:11.904110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:41.854 [2024-11-28 13:10:11.915957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:41.854 [2024-11-28 13:10:11.916434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:41.854 [2024-11-28 13:10:11.916449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:41.854 [2024-11-28 13:10:11.916455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:41.854 [2024-11-28 13:10:11.916606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:41.854 [2024-11-28 13:10:11.916755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:41.854 [2024-11-28 13:10:11.916761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:41.854 [2024-11-28 13:10:11.916766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:41.854 [2024-11-28 13:10:11.916771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:41.854 [2024-11-28 13:10:11.928604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:41.854 [2024-11-28 13:10:11.929070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:41.854 [2024-11-28 13:10:11.929087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:41.854 [2024-11-28 13:10:11.929092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:41.854 [2024-11-28 13:10:11.929247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:41.854 [2024-11-28 13:10:11.929398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:41.854 [2024-11-28 13:10:11.929404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:41.855 [2024-11-28 13:10:11.929408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:41.855 [2024-11-28 13:10:11.929413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:41.855 [2024-11-28 13:10:11.941242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:41.855 [2024-11-28 13:10:11.941716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:41.855 [2024-11-28 13:10:11.941728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:41.855 [2024-11-28 13:10:11.941734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:41.855 [2024-11-28 13:10:11.941884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:41.855 [2024-11-28 13:10:11.942034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:41.855 [2024-11-28 13:10:11.942039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:41.855 [2024-11-28 13:10:11.942044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:41.855 [2024-11-28 13:10:11.942049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:41.855 [2024-11-28 13:10:11.953878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:41.855 [2024-11-28 13:10:11.954262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:41.855 [2024-11-28 13:10:11.954292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:41.855 [2024-11-28 13:10:11.954301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:41.855 [2024-11-28 13:10:11.954469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:41.855 [2024-11-28 13:10:11.954622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:41.855 [2024-11-28 13:10:11.954628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:41.855 [2024-11-28 13:10:11.954634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:41.855 [2024-11-28 13:10:11.954639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:41.855 [2024-11-28 13:10:11.966489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:41.855 [2024-11-28 13:10:11.967079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:41.855 [2024-11-28 13:10:11.967110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:41.855 [2024-11-28 13:10:11.967119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:41.855 [2024-11-28 13:10:11.967295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:41.855 [2024-11-28 13:10:11.967448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:41.855 [2024-11-28 13:10:11.967456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:41.855 [2024-11-28 13:10:11.967463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:41.855 [2024-11-28 13:10:11.967470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:42.118 [2024-11-28 13:10:11.979169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:42.118 [2024-11-28 13:10:11.979786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:42.118 [2024-11-28 13:10:11.979816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:42.118 [2024-11-28 13:10:11.979825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:42.118 [2024-11-28 13:10:11.979991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:42.118 [2024-11-28 13:10:11.980144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:42.118 [2024-11-28 13:10:11.980151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:42.118 [2024-11-28 13:10:11.980164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:42.118 [2024-11-28 13:10:11.980170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:42.118 [2024-11-28 13:10:11.991858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:42.118 [2024-11-28 13:10:11.992322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:42.118 [2024-11-28 13:10:11.992338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:42.118 [2024-11-28 13:10:11.992344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:42.118 [2024-11-28 13:10:11.992494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:42.118 [2024-11-28 13:10:11.992644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:42.118 [2024-11-28 13:10:11.992650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:42.118 [2024-11-28 13:10:11.992655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:42.118 [2024-11-28 13:10:11.992660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:42.118 [2024-11-28 13:10:12.004554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:42.118 [2024-11-28 13:10:12.005018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:42.118 [2024-11-28 13:10:12.005032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:42.118 [2024-11-28 13:10:12.005037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:42.118 [2024-11-28 13:10:12.005191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:42.118 [2024-11-28 13:10:12.005341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:42.118 [2024-11-28 13:10:12.005352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:42.118 [2024-11-28 13:10:12.005357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:42.118 [2024-11-28 13:10:12.005362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:42.118 [2024-11-28 13:10:12.017196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:42.118 [2024-11-28 13:10:12.017583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:42.118 [2024-11-28 13:10:12.017596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:42.118 [2024-11-28 13:10:12.017601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:42.118 [2024-11-28 13:10:12.017751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:42.118 [2024-11-28 13:10:12.017900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:42.118 [2024-11-28 13:10:12.017907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:42.118 [2024-11-28 13:10:12.017912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:42.118 [2024-11-28 13:10:12.017916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:42.118 [2024-11-28 13:10:12.029895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:42.118 [2024-11-28 13:10:12.030285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:42.118 [2024-11-28 13:10:12.030315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:42.118 [2024-11-28 13:10:12.030324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:42.118 [2024-11-28 13:10:12.030492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:42.118 [2024-11-28 13:10:12.030645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:42.118 [2024-11-28 13:10:12.030651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:42.118 [2024-11-28 13:10:12.030657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:42.118 [2024-11-28 13:10:12.030662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:42.118 [2024-11-28 13:10:12.042526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:42.118 [2024-11-28 13:10:12.043144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:42.118 [2024-11-28 13:10:12.043180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:42.118 [2024-11-28 13:10:12.043189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:42.118 [2024-11-28 13:10:12.043357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:42.118 [2024-11-28 13:10:12.043511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:42.118 [2024-11-28 13:10:12.043517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:42.118 [2024-11-28 13:10:12.043522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:42.118 [2024-11-28 13:10:12.043528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:42.118 [2024-11-28 13:10:12.055220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:42.118 [2024-11-28 13:10:12.055821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:42.118 [2024-11-28 13:10:12.055851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:42.118 [2024-11-28 13:10:12.055860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:42.118 [2024-11-28 13:10:12.056026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:42.118 [2024-11-28 13:10:12.056184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:42.118 [2024-11-28 13:10:12.056190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:42.118 [2024-11-28 13:10:12.056196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:42.118 [2024-11-28 13:10:12.056202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:42.118 [2024-11-28 13:10:12.067901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:42.118 [2024-11-28 13:10:12.068374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:42.118 [2024-11-28 13:10:12.068390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420
00:39:42.118 [2024-11-28 13:10:12.068395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set
00:39:42.118 [2024-11-28 13:10:12.068546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor
00:39:42.118 [2024-11-28 13:10:12.068696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:42.118 [2024-11-28 13:10:12.068702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:42.118 [2024-11-28 13:10:12.068707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:42.118 [2024-11-28 13:10:12.068712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:42.118 [2024-11-28 13:10:12.080559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.118 [2024-11-28 13:10:12.080876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.118 [2024-11-28 13:10:12.080891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.118 [2024-11-28 13:10:12.080896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.119 [2024-11-28 13:10:12.081046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.119 [2024-11-28 13:10:12.081203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.119 [2024-11-28 13:10:12.081210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.119 [2024-11-28 13:10:12.081215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.119 [2024-11-28 13:10:12.081220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.119 [2024-11-28 13:10:12.093194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.119 [2024-11-28 13:10:12.093747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.119 [2024-11-28 13:10:12.093782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.119 [2024-11-28 13:10:12.093790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.119 [2024-11-28 13:10:12.093956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.119 [2024-11-28 13:10:12.094108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.119 [2024-11-28 13:10:12.094115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.119 [2024-11-28 13:10:12.094120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.119 [2024-11-28 13:10:12.094125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.119 [2024-11-28 13:10:12.105823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.119 [2024-11-28 13:10:12.106440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.119 [2024-11-28 13:10:12.106470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.119 [2024-11-28 13:10:12.106479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.119 [2024-11-28 13:10:12.106645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.119 [2024-11-28 13:10:12.106798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.119 [2024-11-28 13:10:12.106804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.119 [2024-11-28 13:10:12.106810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.119 [2024-11-28 13:10:12.106815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.119 [2024-11-28 13:10:12.118515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.119 [2024-11-28 13:10:12.119098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.119 [2024-11-28 13:10:12.119129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.119 [2024-11-28 13:10:12.119139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.119 [2024-11-28 13:10:12.119313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.119 [2024-11-28 13:10:12.119466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.119 [2024-11-28 13:10:12.119473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.119 [2024-11-28 13:10:12.119478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.119 [2024-11-28 13:10:12.119484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.119 [2024-11-28 13:10:12.131226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.119 [2024-11-28 13:10:12.131808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.119 [2024-11-28 13:10:12.131838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.119 [2024-11-28 13:10:12.131846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.119 [2024-11-28 13:10:12.132016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.119 [2024-11-28 13:10:12.132176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.119 [2024-11-28 13:10:12.132183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.119 [2024-11-28 13:10:12.132189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.119 [2024-11-28 13:10:12.132196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.119 [2024-11-28 13:10:12.143899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.119 [2024-11-28 13:10:12.144491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.119 [2024-11-28 13:10:12.144522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.119 [2024-11-28 13:10:12.144531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.119 [2024-11-28 13:10:12.144699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.119 [2024-11-28 13:10:12.144851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.119 [2024-11-28 13:10:12.144858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.119 [2024-11-28 13:10:12.144863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.119 [2024-11-28 13:10:12.144869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.119 [2024-11-28 13:10:12.156566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.119 [2024-11-28 13:10:12.157112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.119 [2024-11-28 13:10:12.157143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.119 [2024-11-28 13:10:12.157151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.119 [2024-11-28 13:10:12.157323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.119 [2024-11-28 13:10:12.157476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.119 [2024-11-28 13:10:12.157482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.119 [2024-11-28 13:10:12.157487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.119 [2024-11-28 13:10:12.157493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.119 [2024-11-28 13:10:12.169192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.119 [2024-11-28 13:10:12.169771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.119 [2024-11-28 13:10:12.169801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.119 [2024-11-28 13:10:12.169810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.119 [2024-11-28 13:10:12.169975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.119 [2024-11-28 13:10:12.170128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.119 [2024-11-28 13:10:12.170137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.119 [2024-11-28 13:10:12.170143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.119 [2024-11-28 13:10:12.170148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.119 [2024-11-28 13:10:12.181854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.119 [2024-11-28 13:10:12.182365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.119 [2024-11-28 13:10:12.182395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.119 [2024-11-28 13:10:12.182404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.119 [2024-11-28 13:10:12.182570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.119 [2024-11-28 13:10:12.182723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.119 [2024-11-28 13:10:12.182729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.119 [2024-11-28 13:10:12.182735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.119 [2024-11-28 13:10:12.182741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.119 [2024-11-28 13:10:12.194434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.119 [2024-11-28 13:10:12.194935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.119 [2024-11-28 13:10:12.194950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.119 [2024-11-28 13:10:12.194955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.119 [2024-11-28 13:10:12.195105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.119 [2024-11-28 13:10:12.195261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.119 [2024-11-28 13:10:12.195267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.119 [2024-11-28 13:10:12.195272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.119 [2024-11-28 13:10:12.195277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.119 [2024-11-28 13:10:12.207103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.119 [2024-11-28 13:10:12.207454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.119 [2024-11-28 13:10:12.207467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.119 [2024-11-28 13:10:12.207472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.120 [2024-11-28 13:10:12.207622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.120 [2024-11-28 13:10:12.207772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.120 [2024-11-28 13:10:12.207777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.120 [2024-11-28 13:10:12.207782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.120 [2024-11-28 13:10:12.207787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.120 [2024-11-28 13:10:12.219762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.120 [2024-11-28 13:10:12.220235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.120 [2024-11-28 13:10:12.220249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.120 [2024-11-28 13:10:12.220254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.120 [2024-11-28 13:10:12.220404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.120 [2024-11-28 13:10:12.220554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.120 [2024-11-28 13:10:12.220560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.120 [2024-11-28 13:10:12.220565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.120 [2024-11-28 13:10:12.220570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.120 [2024-11-28 13:10:12.232410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.120 [2024-11-28 13:10:12.232867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.120 [2024-11-28 13:10:12.232880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.120 [2024-11-28 13:10:12.232885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.120 [2024-11-28 13:10:12.233035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.120 [2024-11-28 13:10:12.233189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.120 [2024-11-28 13:10:12.233195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.120 [2024-11-28 13:10:12.233200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.120 [2024-11-28 13:10:12.233206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.382 [2024-11-28 13:10:12.245041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.382 [2024-11-28 13:10:12.245518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.382 [2024-11-28 13:10:12.245532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.382 [2024-11-28 13:10:12.245538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.382 [2024-11-28 13:10:12.245688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.382 [2024-11-28 13:10:12.245838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.382 [2024-11-28 13:10:12.245844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.382 [2024-11-28 13:10:12.245849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.382 [2024-11-28 13:10:12.245854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.382 [2024-11-28 13:10:12.257680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.382 [2024-11-28 13:10:12.258198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.382 [2024-11-28 13:10:12.258233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.382 [2024-11-28 13:10:12.258242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.382 [2024-11-28 13:10:12.258410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.382 [2024-11-28 13:10:12.258563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.382 [2024-11-28 13:10:12.258570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.382 [2024-11-28 13:10:12.258575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.382 [2024-11-28 13:10:12.258581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.382 [2024-11-28 13:10:12.270293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.382 [2024-11-28 13:10:12.270655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.382 [2024-11-28 13:10:12.270670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.382 [2024-11-28 13:10:12.270676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.382 [2024-11-28 13:10:12.270826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.382 [2024-11-28 13:10:12.270976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.382 [2024-11-28 13:10:12.270981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.382 [2024-11-28 13:10:12.270986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.382 [2024-11-28 13:10:12.270991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.382 [2024-11-28 13:10:12.282967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.382 [2024-11-28 13:10:12.283414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.382 [2024-11-28 13:10:12.283445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.382 [2024-11-28 13:10:12.283454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.382 [2024-11-28 13:10:12.283619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.382 [2024-11-28 13:10:12.283772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.382 [2024-11-28 13:10:12.283778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.382 [2024-11-28 13:10:12.283784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.382 [2024-11-28 13:10:12.283790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.382 [2024-11-28 13:10:12.295631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.382 [2024-11-28 13:10:12.296230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.382 [2024-11-28 13:10:12.296261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.383 [2024-11-28 13:10:12.296270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.383 [2024-11-28 13:10:12.296442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.383 [2024-11-28 13:10:12.296594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.383 [2024-11-28 13:10:12.296601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.383 [2024-11-28 13:10:12.296606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.383 [2024-11-28 13:10:12.296611] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.383 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:42.383 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:39:42.383 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:42.383 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:42.383 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:42.383 [2024-11-28 13:10:12.308320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.383 [2024-11-28 13:10:12.308901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.383 [2024-11-28 13:10:12.308931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.383 [2024-11-28 13:10:12.308940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.383 [2024-11-28 13:10:12.309106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.383 [2024-11-28 13:10:12.309265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.383 [2024-11-28 13:10:12.309272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.383 [2024-11-28 13:10:12.309278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.383 [2024-11-28 13:10:12.309283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.383 [2024-11-28 13:10:12.320980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.383 [2024-11-28 13:10:12.321364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.383 [2024-11-28 13:10:12.321379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.383 [2024-11-28 13:10:12.321385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.383 [2024-11-28 13:10:12.321535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.383 [2024-11-28 13:10:12.321686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.383 [2024-11-28 13:10:12.321692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.383 [2024-11-28 13:10:12.321697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.383 [2024-11-28 13:10:12.321702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.383 [2024-11-28 13:10:12.333691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.383 [2024-11-28 13:10:12.334197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.383 [2024-11-28 13:10:12.334211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.383 [2024-11-28 13:10:12.334224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.383 [2024-11-28 13:10:12.334374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.383 [2024-11-28 13:10:12.334524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.383 [2024-11-28 13:10:12.334530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.383 [2024-11-28 13:10:12.334535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.383 [2024-11-28 13:10:12.334539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.383 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:42.383 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:42.383 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.383 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:42.383 [2024-11-28 13:10:12.346386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.383 [2024-11-28 13:10:12.346923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.383 [2024-11-28 13:10:12.346954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.383 [2024-11-28 13:10:12.346962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.383 [2024-11-28 13:10:12.347128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.383 [2024-11-28 13:10:12.347287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.383 [2024-11-28 13:10:12.347294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.383 [2024-11-28 13:10:12.347300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.383 [2024-11-28 13:10:12.347306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.383 [2024-11-28 13:10:12.348168] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:42.383 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.383 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:42.383 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.383 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:42.383 [2024-11-28 13:10:12.358991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.383 [2024-11-28 13:10:12.359584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.383 [2024-11-28 13:10:12.359614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.383 [2024-11-28 13:10:12.359623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.383 [2024-11-28 13:10:12.359788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.383 [2024-11-28 13:10:12.359941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.383 [2024-11-28 13:10:12.359947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.383 [2024-11-28 13:10:12.359957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.383 [2024-11-28 13:10:12.359962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.383 [2024-11-28 13:10:12.371666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.383 [2024-11-28 13:10:12.372313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.383 [2024-11-28 13:10:12.372344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.383 [2024-11-28 13:10:12.372352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.383 [2024-11-28 13:10:12.372519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.383 [2024-11-28 13:10:12.372671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.383 [2024-11-28 13:10:12.372677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.384 [2024-11-28 13:10:12.372683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.384 [2024-11-28 13:10:12.372689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.384 Malloc0 00:39:42.384 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.384 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:42.384 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.384 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:42.384 [2024-11-28 13:10:12.384242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.384 [2024-11-28 13:10:12.384733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.384 [2024-11-28 13:10:12.384763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.384 [2024-11-28 13:10:12.384772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.384 [2024-11-28 13:10:12.384938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.384 [2024-11-28 13:10:12.385090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.384 [2024-11-28 13:10:12.385097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.384 [2024-11-28 13:10:12.385102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.384 [2024-11-28 13:10:12.385108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.384 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.384 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:42.384 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.384 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:42.384 [2024-11-28 13:10:12.396942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.384 [2024-11-28 13:10:12.397397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.384 [2024-11-28 13:10:12.397413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.384 [2024-11-28 13:10:12.397419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.384 [2024-11-28 13:10:12.397573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.384 [2024-11-28 13:10:12.397724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.384 [2024-11-28 13:10:12.397729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.384 [2024-11-28 13:10:12.397735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.384 [2024-11-28 13:10:12.397740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.384 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.384 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:42.384 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.384 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:42.384 [2024-11-28 13:10:12.409567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.384 [2024-11-28 13:10:12.410024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.384 [2024-11-28 13:10:12.410037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2393bd0 with addr=10.0.0.2, port=4420 00:39:42.384 [2024-11-28 13:10:12.410043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393bd0 is same with the state(6) to be set 00:39:42.384 [2024-11-28 13:10:12.410198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393bd0 (9): Bad file descriptor 00:39:42.384 [2024-11-28 13:10:12.410348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:42.384 [2024-11-28 13:10:12.410353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:42.384 [2024-11-28 13:10:12.410358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:42.384 [2024-11-28 13:10:12.410363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:42.384 [2024-11-28 13:10:12.410766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:42.384 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.384 13:10:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3673344 00:39:42.384 [2024-11-28 13:10:12.422206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:42.645 [2024-11-28 13:10:12.539784] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:39:43.849 4615.57 IOPS, 18.03 MiB/s [2024-11-28T12:10:14.916Z] 5630.50 IOPS, 21.99 MiB/s [2024-11-28T12:10:15.856Z] 6431.78 IOPS, 25.12 MiB/s [2024-11-28T12:10:17.240Z] 7040.50 IOPS, 27.50 MiB/s [2024-11-28T12:10:18.181Z] 7561.00 IOPS, 29.54 MiB/s [2024-11-28T12:10:19.122Z] 7997.58 IOPS, 31.24 MiB/s [2024-11-28T12:10:20.063Z] 8366.00 IOPS, 32.68 MiB/s [2024-11-28T12:10:21.003Z] 8671.21 IOPS, 33.87 MiB/s 00:39:50.876 Latency(us) 00:39:50.876 [2024-11-28T12:10:21.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:50.876 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:50.876 Verification LBA range: start 0x0 length 0x4000 00:39:50.876 Nvme1n1 : 15.01 8925.09 34.86 13563.87 0.00 5672.63 739.00 16203.35 00:39:50.876 [2024-11-28T12:10:21.003Z] =================================================================================================================== 00:39:50.876 [2024-11-28T12:10:21.003Z] Total : 8925.09 34.86 13563.87 0.00 5672.63 739.00 16203.35 00:39:50.876 13:10:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:39:50.876 13:10:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:50.876 13:10:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:39:50.876 13:10:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:50.876 13:10:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.876 13:10:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:39:50.877 13:10:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:39:50.877 13:10:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:50.877 13:10:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:39:50.877 13:10:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:50.877 13:10:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:39:50.877 13:10:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:50.877 13:10:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:50.877 rmmod nvme_tcp 00:39:50.877 rmmod nvme_fabrics 00:39:50.877 rmmod nvme_keyring 00:39:51.137 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:51.137 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:39:51.137 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:39:51.137 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3674388 ']' 00:39:51.137 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3674388 00:39:51.137 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3674388 ']' 00:39:51.137 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3674388 00:39:51.137 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:39:51.137 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:39:51.137 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3674388 00:39:51.137 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:51.137 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:51.137 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3674388' 00:39:51.137 killing process with pid 3674388 00:39:51.137 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3674388 00:39:51.137 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3674388 00:39:51.137 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:51.137 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:51.138 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:51.138 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:39:51.138 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:39:51.138 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:51.138 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:39:51.138 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:51.138 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:51.138 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:51.138 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:51.138 13:10:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:53.685 13:10:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:53.685 00:39:53.685 real 0m28.262s 00:39:53.685 user 1m2.872s 00:39:53.685 sys 0m7.566s 00:39:53.685 13:10:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:53.685 13:10:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:53.685 ************************************ 00:39:53.685 END TEST nvmf_bdevperf 00:39:53.685 ************************************ 00:39:53.685 13:10:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:39:53.685 13:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:53.685 13:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:53.685 13:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:53.685 ************************************ 00:39:53.685 START TEST nvmf_target_disconnect 00:39:53.685 ************************************ 00:39:53.685 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:39:53.685 * Looking for test storage... 
00:39:53.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:53.685 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:53.685 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:39:53.686 13:10:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:53.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.686 
--rc genhtml_branch_coverage=1 00:39:53.686 --rc genhtml_function_coverage=1 00:39:53.686 --rc genhtml_legend=1 00:39:53.686 --rc geninfo_all_blocks=1 00:39:53.686 --rc geninfo_unexecuted_blocks=1 00:39:53.686 00:39:53.686 ' 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:53.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.686 --rc genhtml_branch_coverage=1 00:39:53.686 --rc genhtml_function_coverage=1 00:39:53.686 --rc genhtml_legend=1 00:39:53.686 --rc geninfo_all_blocks=1 00:39:53.686 --rc geninfo_unexecuted_blocks=1 00:39:53.686 00:39:53.686 ' 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:53.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.686 --rc genhtml_branch_coverage=1 00:39:53.686 --rc genhtml_function_coverage=1 00:39:53.686 --rc genhtml_legend=1 00:39:53.686 --rc geninfo_all_blocks=1 00:39:53.686 --rc geninfo_unexecuted_blocks=1 00:39:53.686 00:39:53.686 ' 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:53.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.686 --rc genhtml_branch_coverage=1 00:39:53.686 --rc genhtml_function_coverage=1 00:39:53.686 --rc genhtml_legend=1 00:39:53.686 --rc geninfo_all_blocks=1 00:39:53.686 --rc geninfo_unexecuted_blocks=1 00:39:53.686 00:39:53.686 ' 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:39:53.686 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:53.687 13:10:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:53.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:39:53.687 13:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:40:01.831 
13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:01.831 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:01.831 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:01.831 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:01.831 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:01.831 13:10:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:01.831 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:01.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:01.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.721 ms 00:40:01.831 00:40:01.831 --- 10.0.0.2 ping statistics --- 00:40:01.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:01.832 rtt min/avg/max/mdev = 0.721/0.721/0.721/0.000 ms 00:40:01.832 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:01.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:01.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:40:01.832 00:40:01.832 --- 10.0.0.1 ping statistics --- 00:40:01.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:01.832 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:40:01.832 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:01.832 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:40:01.832 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:01.832 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:01.832 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:01.832 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:01.832 13:10:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:01.832 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:01.832 13:10:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:40:01.832 ************************************ 00:40:01.832 START TEST nvmf_target_disconnect_tc1 00:40:01.832 ************************************ 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:01.832 [2024-11-28 13:10:31.300658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:01.832 [2024-11-28 13:10:31.300745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1cfa0 with 
addr=10.0.0.2, port=4420 00:40:01.832 [2024-11-28 13:10:31.300773] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:40:01.832 [2024-11-28 13:10:31.300793] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:01.832 [2024-11-28 13:10:31.300802] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:40:01.832 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:40:01.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:40:01.832 Initializing NVMe Controllers 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:01.832 00:40:01.832 real 0m0.247s 00:40:01.832 user 0m0.055s 00:40:01.832 sys 0m0.091s 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:40:01.832 ************************************ 00:40:01.832 END TEST nvmf_target_disconnect_tc1 00:40:01.832 ************************************ 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:01.832 13:10:31 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:40:01.832 ************************************ 00:40:01.832 START TEST nvmf_target_disconnect_tc2 00:40:01.832 ************************************ 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3680436 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3680436 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3680436 ']' 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:01.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:01.832 13:10:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:01.832 [2024-11-28 13:10:31.460582] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:40:01.832 [2024-11-28 13:10:31.460632] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:01.832 [2024-11-28 13:10:31.593839] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:40:01.832 [2024-11-28 13:10:31.653906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:01.832 [2024-11-28 13:10:31.681867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:01.832 [2024-11-28 13:10:31.681914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:40:01.832 [2024-11-28 13:10:31.681922] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:01.832 [2024-11-28 13:10:31.681929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:01.832 [2024-11-28 13:10:31.681935] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:01.832 [2024-11-28 13:10:31.683845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:40:01.832 [2024-11-28 13:10:31.684003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:40:01.832 [2024-11-28 13:10:31.684131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:01.832 [2024-11-28 13:10:31.684131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:02.404 Malloc0 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:02.404 [2024-11-28 13:10:32.374901] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:02.404 [2024-11-28 13:10:32.415261] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3680616 00:40:02.404 13:10:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:40:02.404 13:10:32 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:04.318 13:10:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3680436 00:40:04.318 13:10:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:40:04.590 Read completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Read completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Read completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Read completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Read completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Read completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Read completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Read completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Read completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Read completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Read completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Read completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Write completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Read completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Read completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Write completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Write completed with error (sct=0, sc=8) 
00:40:04.590 starting I/O failed 00:40:04.590 Read completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Write completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Read completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Write completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Read completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Write completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Write completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Read completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Read completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Read completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Write completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Write completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Read completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Write completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 Write completed with error (sct=0, sc=8) 00:40:04.590 starting I/O failed 00:40:04.590 [2024-11-28 13:10:34.450618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:40:04.590 [2024-11-28 13:10:34.451001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.590 [2024-11-28 13:10:34.451027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.590 qpair failed and we were unable to recover it. 
00:40:04.590 [2024-11-28 13:10:34.451445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.590 [2024-11-28 13:10:34.451483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.590 qpair failed and we were unable to recover it. 00:40:04.590 [2024-11-28 13:10:34.451788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.590 [2024-11-28 13:10:34.451800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.590 qpair failed and we were unable to recover it. 00:40:04.590 [2024-11-28 13:10:34.452152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.590 [2024-11-28 13:10:34.452171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.590 qpair failed and we were unable to recover it. 00:40:04.590 [2024-11-28 13:10:34.452567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.590 [2024-11-28 13:10:34.452605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.590 qpair failed and we were unable to recover it. 00:40:04.590 [2024-11-28 13:10:34.452854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.590 [2024-11-28 13:10:34.452866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.590 qpair failed and we were unable to recover it. 
00:40:04.590 [2024-11-28 13:10:34.453431 through 13:10:34.488092] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: the same error pair (connect() failed, errno = 111; sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420) followed by "qpair failed and we were unable to recover it." repeated ~110 more times.
00:40:04.595 [2024-11-28 13:10:34.488308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.488326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 00:40:04.595 [2024-11-28 13:10:34.488643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.488659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 00:40:04.595 [2024-11-28 13:10:34.488975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.488992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 00:40:04.595 [2024-11-28 13:10:34.489304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.489320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 00:40:04.595 [2024-11-28 13:10:34.489631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.489646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 
00:40:04.595 [2024-11-28 13:10:34.489948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.489963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 00:40:04.595 [2024-11-28 13:10:34.490308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.490326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 00:40:04.595 [2024-11-28 13:10:34.490650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.490665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 00:40:04.595 [2024-11-28 13:10:34.490978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.490995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 00:40:04.595 [2024-11-28 13:10:34.491309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.491326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 
00:40:04.595 [2024-11-28 13:10:34.491529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.491548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 00:40:04.595 [2024-11-28 13:10:34.491874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.491890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 00:40:04.595 [2024-11-28 13:10:34.492214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.492232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 00:40:04.595 [2024-11-28 13:10:34.492554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.492570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 00:40:04.595 [2024-11-28 13:10:34.492868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.492883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 
00:40:04.595 [2024-11-28 13:10:34.493233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.493250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 00:40:04.595 [2024-11-28 13:10:34.493607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.493623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 00:40:04.595 [2024-11-28 13:10:34.493997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.494012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 00:40:04.595 [2024-11-28 13:10:34.494322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.494338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 00:40:04.595 [2024-11-28 13:10:34.494651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.494666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 
00:40:04.595 [2024-11-28 13:10:34.494970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.494986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 00:40:04.595 [2024-11-28 13:10:34.495301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.495318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 00:40:04.595 [2024-11-28 13:10:34.495624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.495640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 00:40:04.595 [2024-11-28 13:10:34.495956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.595 [2024-11-28 13:10:34.495973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.595 qpair failed and we were unable to recover it. 00:40:04.595 [2024-11-28 13:10:34.496285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.596 [2024-11-28 13:10:34.496302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.596 qpair failed and we were unable to recover it. 
00:40:04.596 [2024-11-28 13:10:34.496592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.596 [2024-11-28 13:10:34.496609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.596 qpair failed and we were unable to recover it. 00:40:04.596 [2024-11-28 13:10:34.496928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.596 [2024-11-28 13:10:34.496944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.596 qpair failed and we were unable to recover it. 00:40:04.596 [2024-11-28 13:10:34.497234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.596 [2024-11-28 13:10:34.497251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.596 qpair failed and we were unable to recover it. 00:40:04.596 [2024-11-28 13:10:34.497570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.596 [2024-11-28 13:10:34.497585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.596 qpair failed and we were unable to recover it. 00:40:04.596 [2024-11-28 13:10:34.497893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.596 [2024-11-28 13:10:34.497913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.596 qpair failed and we were unable to recover it. 
00:40:04.596 [2024-11-28 13:10:34.498225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.596 [2024-11-28 13:10:34.498248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.596 qpair failed and we were unable to recover it. 00:40:04.596 [2024-11-28 13:10:34.498615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.596 [2024-11-28 13:10:34.498635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.596 qpair failed and we were unable to recover it. 00:40:04.596 [2024-11-28 13:10:34.498854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.596 [2024-11-28 13:10:34.498877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.596 qpair failed and we were unable to recover it. 00:40:04.596 [2024-11-28 13:10:34.499103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.596 [2024-11-28 13:10:34.499122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.596 qpair failed and we were unable to recover it. 00:40:04.596 [2024-11-28 13:10:34.499480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.596 [2024-11-28 13:10:34.499503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.596 qpair failed and we were unable to recover it. 
00:40:04.596 [2024-11-28 13:10:34.499808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.596 [2024-11-28 13:10:34.499827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.596 qpair failed and we were unable to recover it. 00:40:04.596 [2024-11-28 13:10:34.500169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.596 [2024-11-28 13:10:34.500191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.596 qpair failed and we were unable to recover it. 00:40:04.596 [2024-11-28 13:10:34.500520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.596 [2024-11-28 13:10:34.500541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.596 qpair failed and we were unable to recover it. 00:40:04.596 [2024-11-28 13:10:34.500873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.596 [2024-11-28 13:10:34.500897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.596 qpair failed and we were unable to recover it. 00:40:04.596 [2024-11-28 13:10:34.501229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.596 [2024-11-28 13:10:34.501250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.596 qpair failed and we were unable to recover it. 
00:40:04.596 [2024-11-28 13:10:34.501587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.596 [2024-11-28 13:10:34.501610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.596 qpair failed and we were unable to recover it. 00:40:04.596 [2024-11-28 13:10:34.501950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.596 [2024-11-28 13:10:34.501971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.596 qpair failed and we were unable to recover it. 00:40:04.596 [2024-11-28 13:10:34.502282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.596 [2024-11-28 13:10:34.502303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.596 qpair failed and we were unable to recover it. 00:40:04.596 [2024-11-28 13:10:34.502559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.596 [2024-11-28 13:10:34.502579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.596 qpair failed and we were unable to recover it. 00:40:04.596 [2024-11-28 13:10:34.502911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.596 [2024-11-28 13:10:34.502931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.596 qpair failed and we were unable to recover it. 
00:40:04.596 [2024-11-28 13:10:34.503271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.596 [2024-11-28 13:10:34.503293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.596 qpair failed and we were unable to recover it. 00:40:04.597 [2024-11-28 13:10:34.503629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.503649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 00:40:04.597 [2024-11-28 13:10:34.503963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.503983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 00:40:04.597 [2024-11-28 13:10:34.504335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.504356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 00:40:04.597 [2024-11-28 13:10:34.504661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.504682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 
00:40:04.597 [2024-11-28 13:10:34.505009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.505030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 00:40:04.597 [2024-11-28 13:10:34.505401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.505422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 00:40:04.597 [2024-11-28 13:10:34.505759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.505780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 00:40:04.597 [2024-11-28 13:10:34.506099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.506120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 00:40:04.597 [2024-11-28 13:10:34.506334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.506358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 
00:40:04.597 [2024-11-28 13:10:34.506674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.506694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 00:40:04.597 [2024-11-28 13:10:34.507004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.507025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 00:40:04.597 [2024-11-28 13:10:34.507279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.507300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 00:40:04.597 [2024-11-28 13:10:34.507631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.507651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 00:40:04.597 [2024-11-28 13:10:34.507955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.507976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 
00:40:04.597 [2024-11-28 13:10:34.508308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.508338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 00:40:04.597 [2024-11-28 13:10:34.508686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.508715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 00:40:04.597 [2024-11-28 13:10:34.509067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.509095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 00:40:04.597 [2024-11-28 13:10:34.509424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.509453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 00:40:04.597 [2024-11-28 13:10:34.509800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.509827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 
00:40:04.597 [2024-11-28 13:10:34.510177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.510206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 00:40:04.597 [2024-11-28 13:10:34.510549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.510578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 00:40:04.597 [2024-11-28 13:10:34.510933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.510961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 00:40:04.597 [2024-11-28 13:10:34.511308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.511338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 00:40:04.597 [2024-11-28 13:10:34.511673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.511700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 
00:40:04.597 [2024-11-28 13:10:34.512059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.512086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 00:40:04.597 [2024-11-28 13:10:34.512341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.597 [2024-11-28 13:10:34.512371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.597 qpair failed and we were unable to recover it. 00:40:04.598 [2024-11-28 13:10:34.512734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.512761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 00:40:04.598 [2024-11-28 13:10:34.513155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.513192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 00:40:04.598 [2024-11-28 13:10:34.513527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.513556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 
00:40:04.598 [2024-11-28 13:10:34.513913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.513940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 00:40:04.598 [2024-11-28 13:10:34.514289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.514317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 00:40:04.598 [2024-11-28 13:10:34.514651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.514679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 00:40:04.598 [2024-11-28 13:10:34.515016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.515049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 00:40:04.598 [2024-11-28 13:10:34.515392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.515421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 
00:40:04.598 [2024-11-28 13:10:34.515765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.515794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 00:40:04.598 [2024-11-28 13:10:34.516110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.516138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 00:40:04.598 [2024-11-28 13:10:34.516487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.516517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 00:40:04.598 [2024-11-28 13:10:34.516863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.516890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 00:40:04.598 [2024-11-28 13:10:34.517225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.517255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 
00:40:04.598 [2024-11-28 13:10:34.517616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.517644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 00:40:04.598 [2024-11-28 13:10:34.517989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.518018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 00:40:04.598 [2024-11-28 13:10:34.518366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.518395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 00:40:04.598 [2024-11-28 13:10:34.518738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.518766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 00:40:04.598 [2024-11-28 13:10:34.519182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.519211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 
00:40:04.598 [2024-11-28 13:10:34.519535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.519562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 00:40:04.598 [2024-11-28 13:10:34.519953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.519981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 00:40:04.598 [2024-11-28 13:10:34.520322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.520352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 00:40:04.598 [2024-11-28 13:10:34.520791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.520819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 00:40:04.598 [2024-11-28 13:10:34.521182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.521211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 
00:40:04.598 [2024-11-28 13:10:34.521556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.521583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 00:40:04.598 [2024-11-28 13:10:34.521931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.521958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 00:40:04.598 [2024-11-28 13:10:34.522322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.598 [2024-11-28 13:10:34.522351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.598 qpair failed and we were unable to recover it. 00:40:04.598 [2024-11-28 13:10:34.522693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.522722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 00:40:04.599 [2024-11-28 13:10:34.523067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.523095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 
00:40:04.599 [2024-11-28 13:10:34.523911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.523955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 00:40:04.599 [2024-11-28 13:10:34.524323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.524358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 00:40:04.599 [2024-11-28 13:10:34.524705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.524735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 00:40:04.599 [2024-11-28 13:10:34.525043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.525071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 00:40:04.599 [2024-11-28 13:10:34.525403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.525433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 
00:40:04.599 [2024-11-28 13:10:34.525784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.525813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 00:40:04.599 [2024-11-28 13:10:34.526140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.526186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 00:40:04.599 [2024-11-28 13:10:34.526535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.526563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 00:40:04.599 [2024-11-28 13:10:34.526904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.526932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 00:40:04.599 [2024-11-28 13:10:34.527273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.527303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 
00:40:04.599 [2024-11-28 13:10:34.527656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.527683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 00:40:04.599 [2024-11-28 13:10:34.528030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.528058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 00:40:04.599 [2024-11-28 13:10:34.528420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.528451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 00:40:04.599 [2024-11-28 13:10:34.528799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.528828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 00:40:04.599 [2024-11-28 13:10:34.529182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.529212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 
00:40:04.599 [2024-11-28 13:10:34.529569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.529597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 00:40:04.599 [2024-11-28 13:10:34.529942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.529969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 00:40:04.599 [2024-11-28 13:10:34.530336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.530366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 00:40:04.599 [2024-11-28 13:10:34.530708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.530741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 00:40:04.599 [2024-11-28 13:10:34.531092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.531120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 
00:40:04.599 [2024-11-28 13:10:34.531435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.531464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 00:40:04.599 [2024-11-28 13:10:34.531827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.531854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 00:40:04.599 [2024-11-28 13:10:34.532152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.532187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 00:40:04.599 [2024-11-28 13:10:34.532459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.532491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.599 qpair failed and we were unable to recover it. 00:40:04.599 [2024-11-28 13:10:34.532857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.599 [2024-11-28 13:10:34.532884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 
00:40:04.600 [2024-11-28 13:10:34.533213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.533242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 00:40:04.600 [2024-11-28 13:10:34.533661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.533689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 00:40:04.600 [2024-11-28 13:10:34.534028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.534056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 00:40:04.600 [2024-11-28 13:10:34.534431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.534460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 00:40:04.600 [2024-11-28 13:10:34.534814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.534841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 
00:40:04.600 [2024-11-28 13:10:34.535221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.535251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 00:40:04.600 [2024-11-28 13:10:34.535583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.535612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 00:40:04.600 [2024-11-28 13:10:34.535940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.535968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 00:40:04.600 [2024-11-28 13:10:34.536215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.536243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 00:40:04.600 [2024-11-28 13:10:34.536598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.536626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 
00:40:04.600 [2024-11-28 13:10:34.536971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.536999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 00:40:04.600 [2024-11-28 13:10:34.537336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.537372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 00:40:04.600 [2024-11-28 13:10:34.537763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.537792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 00:40:04.600 [2024-11-28 13:10:34.538138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.538174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 00:40:04.600 [2024-11-28 13:10:34.538512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.538541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 
00:40:04.600 [2024-11-28 13:10:34.538878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.538906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 00:40:04.600 [2024-11-28 13:10:34.539273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.539302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 00:40:04.600 [2024-11-28 13:10:34.539653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.539681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 00:40:04.600 [2024-11-28 13:10:34.540056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.540084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 00:40:04.600 [2024-11-28 13:10:34.540413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.540443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 
00:40:04.600 [2024-11-28 13:10:34.540779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.540807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 00:40:04.600 [2024-11-28 13:10:34.541156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.541192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 00:40:04.600 [2024-11-28 13:10:34.541534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.541561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 00:40:04.600 [2024-11-28 13:10:34.541891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.541919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 00:40:04.600 [2024-11-28 13:10:34.542274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.542303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 
00:40:04.600 [2024-11-28 13:10:34.542551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.600 [2024-11-28 13:10:34.542578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.600 qpair failed and we were unable to recover it. 00:40:04.601 [2024-11-28 13:10:34.542931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.542959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 00:40:04.601 [2024-11-28 13:10:34.543321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.543349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 00:40:04.601 [2024-11-28 13:10:34.543698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.543725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 00:40:04.601 [2024-11-28 13:10:34.544074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.544101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 
00:40:04.601 [2024-11-28 13:10:34.544345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.544378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 00:40:04.601 [2024-11-28 13:10:34.544611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.544639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 00:40:04.601 [2024-11-28 13:10:34.544874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.544902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 00:40:04.601 [2024-11-28 13:10:34.545170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.545210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 00:40:04.601 [2024-11-28 13:10:34.545562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.545590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 
00:40:04.601 [2024-11-28 13:10:34.545913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.545941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 00:40:04.601 [2024-11-28 13:10:34.546287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.546316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 00:40:04.601 [2024-11-28 13:10:34.546681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.546709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 00:40:04.601 [2024-11-28 13:10:34.547102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.547130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 00:40:04.601 [2024-11-28 13:10:34.547547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.547576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 
00:40:04.601 [2024-11-28 13:10:34.547926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.547953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 00:40:04.601 [2024-11-28 13:10:34.548289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.548318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 00:40:04.601 [2024-11-28 13:10:34.548680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.548708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 00:40:04.601 [2024-11-28 13:10:34.549047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.549074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 00:40:04.601 [2024-11-28 13:10:34.549407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.549436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 
00:40:04.601 [2024-11-28 13:10:34.549788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.549816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 00:40:04.601 [2024-11-28 13:10:34.550173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.550202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 00:40:04.601 [2024-11-28 13:10:34.550535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.550563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 00:40:04.601 [2024-11-28 13:10:34.550900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.550927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 00:40:04.601 [2024-11-28 13:10:34.551227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.601 [2024-11-28 13:10:34.551257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.601 qpair failed and we were unable to recover it. 
00:40:04.606 [2024-11-28 13:10:34.592724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.606 [2024-11-28 13:10:34.592753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.606 qpair failed and we were unable to recover it. 00:40:04.606 [2024-11-28 13:10:34.593105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.606 [2024-11-28 13:10:34.593134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.606 qpair failed and we were unable to recover it. 00:40:04.606 [2024-11-28 13:10:34.593501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.606 [2024-11-28 13:10:34.593529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.606 qpair failed and we were unable to recover it. 00:40:04.606 [2024-11-28 13:10:34.593892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.606 [2024-11-28 13:10:34.593920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.606 qpair failed and we were unable to recover it. 00:40:04.606 [2024-11-28 13:10:34.594272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.606 [2024-11-28 13:10:34.594301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.606 qpair failed and we were unable to recover it. 
00:40:04.606 [2024-11-28 13:10:34.594632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.606 [2024-11-28 13:10:34.594661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.606 qpair failed and we were unable to recover it. 00:40:04.606 [2024-11-28 13:10:34.595027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.606 [2024-11-28 13:10:34.595055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.606 qpair failed and we were unable to recover it. 00:40:04.606 [2024-11-28 13:10:34.595402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.606 [2024-11-28 13:10:34.595432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.606 qpair failed and we were unable to recover it. 00:40:04.606 [2024-11-28 13:10:34.595773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.606 [2024-11-28 13:10:34.595800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.606 qpair failed and we were unable to recover it. 00:40:04.606 [2024-11-28 13:10:34.596179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.606 [2024-11-28 13:10:34.596208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.606 qpair failed and we were unable to recover it. 
00:40:04.606 [2024-11-28 13:10:34.596565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.606 [2024-11-28 13:10:34.596593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.606 qpair failed and we were unable to recover it. 00:40:04.606 [2024-11-28 13:10:34.596931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.606 [2024-11-28 13:10:34.596960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.606 qpair failed and we were unable to recover it. 00:40:04.606 [2024-11-28 13:10:34.597302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.606 [2024-11-28 13:10:34.597331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.606 qpair failed and we were unable to recover it. 00:40:04.606 [2024-11-28 13:10:34.597692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.606 [2024-11-28 13:10:34.597720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.606 qpair failed and we were unable to recover it. 00:40:04.606 [2024-11-28 13:10:34.598065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.606 [2024-11-28 13:10:34.598092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.606 qpair failed and we were unable to recover it. 
00:40:04.606 [2024-11-28 13:10:34.598430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.606 [2024-11-28 13:10:34.598458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.606 qpair failed and we were unable to recover it. 00:40:04.606 [2024-11-28 13:10:34.598802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.606 [2024-11-28 13:10:34.598830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.606 qpair failed and we were unable to recover it. 00:40:04.606 [2024-11-28 13:10:34.599186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.606 [2024-11-28 13:10:34.599216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.606 qpair failed and we were unable to recover it. 00:40:04.606 [2024-11-28 13:10:34.599571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.606 [2024-11-28 13:10:34.599599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.606 qpair failed and we were unable to recover it. 00:40:04.606 [2024-11-28 13:10:34.599964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.606 [2024-11-28 13:10:34.599991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.606 qpair failed and we were unable to recover it. 
00:40:04.606 [2024-11-28 13:10:34.600363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.600398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 00:40:04.607 [2024-11-28 13:10:34.600747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.600775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 00:40:04.607 [2024-11-28 13:10:34.601028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.601055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 00:40:04.607 [2024-11-28 13:10:34.601414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.601443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 00:40:04.607 [2024-11-28 13:10:34.601837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.601866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 
00:40:04.607 [2024-11-28 13:10:34.602225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.602254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 00:40:04.607 [2024-11-28 13:10:34.602605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.602634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 00:40:04.607 [2024-11-28 13:10:34.602983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.603011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 00:40:04.607 [2024-11-28 13:10:34.603352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.603383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 00:40:04.607 [2024-11-28 13:10:34.603660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.603688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 
00:40:04.607 [2024-11-28 13:10:34.604035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.604064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 00:40:04.607 [2024-11-28 13:10:34.604428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.604457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 00:40:04.607 [2024-11-28 13:10:34.604792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.604820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 00:40:04.607 [2024-11-28 13:10:34.605093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.605120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 00:40:04.607 [2024-11-28 13:10:34.605509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.605539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 
00:40:04.607 [2024-11-28 13:10:34.605789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.605815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 00:40:04.607 [2024-11-28 13:10:34.606156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.606192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 00:40:04.607 [2024-11-28 13:10:34.606525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.606554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 00:40:04.607 [2024-11-28 13:10:34.606910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.606938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 00:40:04.607 [2024-11-28 13:10:34.607290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.607320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 
00:40:04.607 [2024-11-28 13:10:34.607670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.607698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 00:40:04.607 [2024-11-28 13:10:34.607946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.607976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 00:40:04.607 [2024-11-28 13:10:34.608322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.608352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 00:40:04.607 [2024-11-28 13:10:34.608603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.608635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 00:40:04.607 [2024-11-28 13:10:34.609022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.609050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 
00:40:04.607 [2024-11-28 13:10:34.609388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.609418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 00:40:04.607 [2024-11-28 13:10:34.609751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.607 [2024-11-28 13:10:34.609779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.607 qpair failed and we were unable to recover it. 00:40:04.608 [2024-11-28 13:10:34.610139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.610193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 00:40:04.608 [2024-11-28 13:10:34.610557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.610585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 00:40:04.608 [2024-11-28 13:10:34.610935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.610964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 
00:40:04.608 [2024-11-28 13:10:34.611318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.611348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 00:40:04.608 [2024-11-28 13:10:34.611694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.611721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 00:40:04.608 [2024-11-28 13:10:34.612075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.612102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 00:40:04.608 [2024-11-28 13:10:34.612465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.612494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 00:40:04.608 [2024-11-28 13:10:34.612847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.612875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 
00:40:04.608 [2024-11-28 13:10:34.613221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.613249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 00:40:04.608 [2024-11-28 13:10:34.613639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.613667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 00:40:04.608 [2024-11-28 13:10:34.614021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.614051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 00:40:04.608 [2024-11-28 13:10:34.614455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.614484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 00:40:04.608 [2024-11-28 13:10:34.614805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.614832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 
00:40:04.608 [2024-11-28 13:10:34.615182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.615218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 00:40:04.608 [2024-11-28 13:10:34.615545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.615573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 00:40:04.608 [2024-11-28 13:10:34.615891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.615919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 00:40:04.608 [2024-11-28 13:10:34.616274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.616304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 00:40:04.608 [2024-11-28 13:10:34.616664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.616691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 
00:40:04.608 [2024-11-28 13:10:34.617060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.617087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 00:40:04.608 [2024-11-28 13:10:34.617394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.617423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 00:40:04.608 [2024-11-28 13:10:34.617755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.617783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 00:40:04.608 [2024-11-28 13:10:34.618136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.618179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 00:40:04.608 [2024-11-28 13:10:34.618520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.618549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 
00:40:04.608 [2024-11-28 13:10:34.618917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.618946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 00:40:04.608 [2024-11-28 13:10:34.619329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.619358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 00:40:04.608 [2024-11-28 13:10:34.619687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.619716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 00:40:04.608 [2024-11-28 13:10:34.620077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.608 [2024-11-28 13:10:34.620105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.608 qpair failed and we were unable to recover it. 00:40:04.609 [2024-11-28 13:10:34.620480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.609 [2024-11-28 13:10:34.620510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.609 qpair failed and we were unable to recover it. 
00:40:04.609 [2024-11-28 13:10:34.620849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.609 [2024-11-28 13:10:34.620878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.609 qpair failed and we were unable to recover it. 00:40:04.609 [2024-11-28 13:10:34.621221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.609 [2024-11-28 13:10:34.621250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.609 qpair failed and we were unable to recover it. 00:40:04.609 [2024-11-28 13:10:34.621639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.609 [2024-11-28 13:10:34.621666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.609 qpair failed and we were unable to recover it. 00:40:04.609 [2024-11-28 13:10:34.621990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.609 [2024-11-28 13:10:34.622017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.609 qpair failed and we were unable to recover it. 00:40:04.609 [2024-11-28 13:10:34.622251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.609 [2024-11-28 13:10:34.622280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.609 qpair failed and we were unable to recover it. 
00:40:04.609 [2024-11-28 13:10:34.622638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.609 [2024-11-28 13:10:34.622666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.609 qpair failed and we were unable to recover it. 00:40:04.609 [2024-11-28 13:10:34.623039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.609 [2024-11-28 13:10:34.623067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.609 qpair failed and we were unable to recover it. 00:40:04.609 [2024-11-28 13:10:34.623408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.609 [2024-11-28 13:10:34.623438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.609 qpair failed and we were unable to recover it. 00:40:04.609 [2024-11-28 13:10:34.623791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.609 [2024-11-28 13:10:34.623819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.609 qpair failed and we were unable to recover it. 00:40:04.609 [2024-11-28 13:10:34.624179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.609 [2024-11-28 13:10:34.624208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.609 qpair failed and we were unable to recover it. 
00:40:04.609 [2024-11-28 13:10:34.624558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.609 [2024-11-28 13:10:34.624585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.609 qpair failed and we were unable to recover it.
00:40:04.609 [2024-11-28 13:10:34.624961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.609 [2024-11-28 13:10:34.624988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.609 qpair failed and we were unable to recover it.
00:40:04.609 [2024-11-28 13:10:34.625235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.609 [2024-11-28 13:10:34.625265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.609 qpair failed and we were unable to recover it.
00:40:04.609 [2024-11-28 13:10:34.625622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.609 [2024-11-28 13:10:34.625650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.609 qpair failed and we were unable to recover it.
00:40:04.609 [2024-11-28 13:10:34.626015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.609 [2024-11-28 13:10:34.626042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.609 qpair failed and we were unable to recover it.
00:40:04.609 [2024-11-28 13:10:34.626400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.609 [2024-11-28 13:10:34.626430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.609 qpair failed and we were unable to recover it.
00:40:04.609 [2024-11-28 13:10:34.626780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.609 [2024-11-28 13:10:34.626808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.609 qpair failed and we were unable to recover it.
00:40:04.609 [2024-11-28 13:10:34.627173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.609 [2024-11-28 13:10:34.627202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.609 qpair failed and we were unable to recover it.
00:40:04.609 [2024-11-28 13:10:34.627540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.609 [2024-11-28 13:10:34.627569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.609 qpair failed and we were unable to recover it.
00:40:04.609 [2024-11-28 13:10:34.627946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.609 [2024-11-28 13:10:34.627974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.609 qpair failed and we were unable to recover it.
00:40:04.609 [2024-11-28 13:10:34.628327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.609 [2024-11-28 13:10:34.628355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.609 qpair failed and we were unable to recover it.
00:40:04.609 [2024-11-28 13:10:34.628773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.609 [2024-11-28 13:10:34.628800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.629148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.629185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.629541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.629569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.629945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.629973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.630340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.630375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.630722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.630750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.631094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.631122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.631422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.631452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.631811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.631838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.632094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.632125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.632473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.632503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.632852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.632881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.633114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.633145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.633446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.633475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.633834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.633862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.634214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.634243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.634598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.634625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.634993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.635021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.635387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.635417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.635770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.635797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.636141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.636178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.636527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.636554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.636792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.636823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.637249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.637281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.637612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.637641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.638009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.610 [2024-11-28 13:10:34.638036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.610 qpair failed and we were unable to recover it.
00:40:04.610 [2024-11-28 13:10:34.638386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.638417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.638794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.638822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.638947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.638978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.639371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.639401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.639730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.639757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.640089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.640118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.640335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.640363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.640700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.640728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.641079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.641107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.641517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.641546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.641869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.641898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.642256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.642286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.642534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.642561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.642907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.642934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.643288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.643317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.643666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.643695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.644027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.644054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.644348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.644377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.644729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.644764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.645116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.645143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.645500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.645529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.645868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.645896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.646253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.646282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.646640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.646668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.647042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.647070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.647425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.647455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.647810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.647838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.648191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.648221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.611 qpair failed and we were unable to recover it.
00:40:04.611 [2024-11-28 13:10:34.648575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.611 [2024-11-28 13:10:34.648602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.648982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.649011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.649362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.649391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.649739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.649767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.650123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.650152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.650472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.650501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.650833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.650861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.651217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.651246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.651587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.651616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.651971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.651999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.652401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.652431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.652762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.652791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.653167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.653196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.653531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.653559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.653818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.653846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.654240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.654270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.654634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.654662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.655013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.655041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.655388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.655419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.655806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.655834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.656192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.656223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.656556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.656586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.656918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.656945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.657301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.657330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.657696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.657724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.658070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.658098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.658465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.658494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.658844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.658871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.612 [2024-11-28 13:10:34.659236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.612 [2024-11-28 13:10:34.659266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.612 qpair failed and we were unable to recover it.
00:40:04.613 [2024-11-28 13:10:34.659529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.613 [2024-11-28 13:10:34.659557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.613 qpair failed and we were unable to recover it.
00:40:04.613 [2024-11-28 13:10:34.659905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.659938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 00:40:04.613 [2024-11-28 13:10:34.660301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.660332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 00:40:04.613 [2024-11-28 13:10:34.660692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.660719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 00:40:04.613 [2024-11-28 13:10:34.661138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.661176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 00:40:04.613 [2024-11-28 13:10:34.661563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.661591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 
00:40:04.613 [2024-11-28 13:10:34.661968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.661995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 00:40:04.613 [2024-11-28 13:10:34.662375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.662404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 00:40:04.613 [2024-11-28 13:10:34.662751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.662779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 00:40:04.613 [2024-11-28 13:10:34.663139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.663177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 00:40:04.613 [2024-11-28 13:10:34.663530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.663558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 
00:40:04.613 [2024-11-28 13:10:34.664000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.664029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 00:40:04.613 [2024-11-28 13:10:34.664381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.664411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 00:40:04.613 [2024-11-28 13:10:34.664766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.664793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 00:40:04.613 [2024-11-28 13:10:34.665175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.665205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 00:40:04.613 [2024-11-28 13:10:34.665520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.665548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 
00:40:04.613 [2024-11-28 13:10:34.665912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.665941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 00:40:04.613 [2024-11-28 13:10:34.666290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.666320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 00:40:04.613 [2024-11-28 13:10:34.666679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.666706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 00:40:04.613 [2024-11-28 13:10:34.667065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.667094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 00:40:04.613 [2024-11-28 13:10:34.667375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.667404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 
00:40:04.613 [2024-11-28 13:10:34.667759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.667788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 00:40:04.613 [2024-11-28 13:10:34.668148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.668189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 00:40:04.613 [2024-11-28 13:10:34.668527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.668556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 00:40:04.613 [2024-11-28 13:10:34.668824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.668852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 00:40:04.613 [2024-11-28 13:10:34.669199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.669228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 
00:40:04.613 [2024-11-28 13:10:34.669559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.669588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 00:40:04.613 [2024-11-28 13:10:34.669957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.613 [2024-11-28 13:10:34.669984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.613 qpair failed and we were unable to recover it. 00:40:04.613 [2024-11-28 13:10:34.670454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.670484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 00:40:04.614 [2024-11-28 13:10:34.670816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.670845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 00:40:04.614 [2024-11-28 13:10:34.671181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.671211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 
00:40:04.614 [2024-11-28 13:10:34.671549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.671578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 00:40:04.614 [2024-11-28 13:10:34.671940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.671967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 00:40:04.614 [2024-11-28 13:10:34.672212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.672241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 00:40:04.614 [2024-11-28 13:10:34.672600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.672628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 00:40:04.614 [2024-11-28 13:10:34.672980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.673008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 
00:40:04.614 [2024-11-28 13:10:34.673375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.673404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 00:40:04.614 [2024-11-28 13:10:34.673750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.673778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 00:40:04.614 [2024-11-28 13:10:34.674137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.674171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 00:40:04.614 [2024-11-28 13:10:34.674536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.674565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 00:40:04.614 [2024-11-28 13:10:34.674902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.674930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 
00:40:04.614 [2024-11-28 13:10:34.675294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.675329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 00:40:04.614 [2024-11-28 13:10:34.675676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.675704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 00:40:04.614 [2024-11-28 13:10:34.676060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.676088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 00:40:04.614 [2024-11-28 13:10:34.676443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.676472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 00:40:04.614 [2024-11-28 13:10:34.676830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.676858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 
00:40:04.614 [2024-11-28 13:10:34.677220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.677251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 00:40:04.614 [2024-11-28 13:10:34.677614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.677641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 00:40:04.614 [2024-11-28 13:10:34.678012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.678039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 00:40:04.614 [2024-11-28 13:10:34.678386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.678415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 00:40:04.614 [2024-11-28 13:10:34.678767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.678794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 
00:40:04.614 [2024-11-28 13:10:34.679140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.679175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 00:40:04.614 [2024-11-28 13:10:34.679585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.679613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 00:40:04.614 [2024-11-28 13:10:34.679940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.679968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 00:40:04.614 [2024-11-28 13:10:34.680316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.680344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 00:40:04.614 [2024-11-28 13:10:34.680698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.680727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.614 qpair failed and we were unable to recover it. 
00:40:04.614 [2024-11-28 13:10:34.681089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.614 [2024-11-28 13:10:34.681117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 00:40:04.615 [2024-11-28 13:10:34.681471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.681500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 00:40:04.615 [2024-11-28 13:10:34.681849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.681880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 00:40:04.615 [2024-11-28 13:10:34.682238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.682268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 00:40:04.615 [2024-11-28 13:10:34.682587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.682616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 
00:40:04.615 [2024-11-28 13:10:34.683050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.683078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 00:40:04.615 [2024-11-28 13:10:34.683447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.683476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 00:40:04.615 [2024-11-28 13:10:34.683831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.683858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 00:40:04.615 [2024-11-28 13:10:34.684223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.684251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 00:40:04.615 [2024-11-28 13:10:34.684598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.684626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 
00:40:04.615 [2024-11-28 13:10:34.684993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.685021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 00:40:04.615 [2024-11-28 13:10:34.685383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.685412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 00:40:04.615 [2024-11-28 13:10:34.685792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.685820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 00:40:04.615 [2024-11-28 13:10:34.686172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.686202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 00:40:04.615 [2024-11-28 13:10:34.686534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.686562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 
00:40:04.615 [2024-11-28 13:10:34.686804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.686831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 00:40:04.615 [2024-11-28 13:10:34.687187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.687216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 00:40:04.615 [2024-11-28 13:10:34.687468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.687495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 00:40:04.615 [2024-11-28 13:10:34.687862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.687890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 00:40:04.615 [2024-11-28 13:10:34.688253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.688282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 
00:40:04.615 [2024-11-28 13:10:34.688541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.688569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 00:40:04.615 [2024-11-28 13:10:34.688973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.689000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 00:40:04.615 [2024-11-28 13:10:34.689355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.689385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 00:40:04.615 [2024-11-28 13:10:34.689724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.689752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 00:40:04.615 [2024-11-28 13:10:34.690020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.690047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 
00:40:04.615 [2024-11-28 13:10:34.690451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.690486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 00:40:04.615 [2024-11-28 13:10:34.690820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.690850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 00:40:04.615 [2024-11-28 13:10:34.691070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.691098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.615 qpair failed and we were unable to recover it. 00:40:04.615 [2024-11-28 13:10:34.691484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.615 [2024-11-28 13:10:34.691514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.616 qpair failed and we were unable to recover it. 00:40:04.616 [2024-11-28 13:10:34.691868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.616 [2024-11-28 13:10:34.691896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.616 qpair failed and we were unable to recover it. 
00:40:04.616 [2024-11-28 13:10:34.692256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.616 [2024-11-28 13:10:34.692285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.616 qpair failed and we were unable to recover it. 00:40:04.616 [2024-11-28 13:10:34.692653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.616 [2024-11-28 13:10:34.692680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.616 qpair failed and we were unable to recover it. 00:40:04.616 [2024-11-28 13:10:34.693039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.616 [2024-11-28 13:10:34.693066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.616 qpair failed and we were unable to recover it. 00:40:04.616 [2024-11-28 13:10:34.693422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.616 [2024-11-28 13:10:34.693451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.616 qpair failed and we were unable to recover it. 00:40:04.616 [2024-11-28 13:10:34.693709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.616 [2024-11-28 13:10:34.693736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.616 qpair failed and we were unable to recover it. 
[... the same three-record failure (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats through 2024-11-28 13:10:34.735401 ...]
00:40:04.890 [2024-11-28 13:10:34.735773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.890 [2024-11-28 13:10:34.735807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.890 qpair failed and we were unable to recover it. 00:40:04.890 [2024-11-28 13:10:34.736150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.890 [2024-11-28 13:10:34.736188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.890 qpair failed and we were unable to recover it. 00:40:04.890 [2024-11-28 13:10:34.736408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.890 [2024-11-28 13:10:34.736436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.890 qpair failed and we were unable to recover it. 00:40:04.890 [2024-11-28 13:10:34.736818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.890 [2024-11-28 13:10:34.736845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.890 qpair failed and we were unable to recover it. 00:40:04.890 [2024-11-28 13:10:34.737208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.890 [2024-11-28 13:10:34.737238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.890 qpair failed and we were unable to recover it. 
00:40:04.890 [2024-11-28 13:10:34.737519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.890 [2024-11-28 13:10:34.737550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.890 qpair failed and we were unable to recover it. 00:40:04.890 [2024-11-28 13:10:34.737902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.890 [2024-11-28 13:10:34.737932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.890 qpair failed and we were unable to recover it. 00:40:04.890 [2024-11-28 13:10:34.738319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.890 [2024-11-28 13:10:34.738349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.890 qpair failed and we were unable to recover it. 00:40:04.890 [2024-11-28 13:10:34.738794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.890 [2024-11-28 13:10:34.738824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.890 qpair failed and we were unable to recover it. 00:40:04.890 [2024-11-28 13:10:34.739182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.890 [2024-11-28 13:10:34.739212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.890 qpair failed and we were unable to recover it. 
00:40:04.890 [2024-11-28 13:10:34.739565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.890 [2024-11-28 13:10:34.739593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.890 qpair failed and we were unable to recover it. 00:40:04.890 [2024-11-28 13:10:34.739961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.890 [2024-11-28 13:10:34.739989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.890 qpair failed and we were unable to recover it. 00:40:04.890 [2024-11-28 13:10:34.740261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.890 [2024-11-28 13:10:34.740289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.890 qpair failed and we were unable to recover it. 00:40:04.890 [2024-11-28 13:10:34.740535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.890 [2024-11-28 13:10:34.740564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.890 qpair failed and we were unable to recover it. 00:40:04.890 [2024-11-28 13:10:34.740814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.890 [2024-11-28 13:10:34.740843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.890 qpair failed and we were unable to recover it. 
00:40:04.890 [2024-11-28 13:10:34.741222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.890 [2024-11-28 13:10:34.741251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.890 qpair failed and we were unable to recover it. 00:40:04.890 [2024-11-28 13:10:34.741506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.890 [2024-11-28 13:10:34.741534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.890 qpair failed and we were unable to recover it. 00:40:04.890 [2024-11-28 13:10:34.741911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.890 [2024-11-28 13:10:34.741938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.890 qpair failed and we were unable to recover it. 00:40:04.890 [2024-11-28 13:10:34.742304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.890 [2024-11-28 13:10:34.742333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.890 qpair failed and we were unable to recover it. 00:40:04.890 [2024-11-28 13:10:34.742700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.890 [2024-11-28 13:10:34.742728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.890 qpair failed and we were unable to recover it. 
00:40:04.890 [2024-11-28 13:10:34.742968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.890 [2024-11-28 13:10:34.742999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.890 qpair failed and we were unable to recover it. 00:40:04.890 [2024-11-28 13:10:34.743410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.890 [2024-11-28 13:10:34.743439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.890 qpair failed and we were unable to recover it. 00:40:04.890 [2024-11-28 13:10:34.743801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.890 [2024-11-28 13:10:34.743831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.744182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.744212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.744571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.744599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 
00:40:04.891 [2024-11-28 13:10:34.744966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.744993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.745300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.745329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.745645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.745679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.746040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.746069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.746287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.746316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 
00:40:04.891 [2024-11-28 13:10:34.746728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.746757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.747132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.747169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.747389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.747418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.747799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.747828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.748193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.748222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 
00:40:04.891 [2024-11-28 13:10:34.748583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.748610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.748859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.748890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.749283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.749313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.749701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.749729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.750119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.750146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 
00:40:04.891 [2024-11-28 13:10:34.750453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.750482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.750708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.750736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.751089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.751117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.751503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.751532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.751893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.751920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 
00:40:04.891 [2024-11-28 13:10:34.752156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.752194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.752532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.752560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.752923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.752958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.753303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.753333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.753701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.753729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 
00:40:04.891 [2024-11-28 13:10:34.754129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.754157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.754520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.754548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.754907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.754935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.755323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.755352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.755595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.755627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 
00:40:04.891 [2024-11-28 13:10:34.756002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.756031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.756406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.756437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.756809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.756839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.757105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.757133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.757372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.757401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 
00:40:04.891 [2024-11-28 13:10:34.757684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.757713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.758064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.758092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.758479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.758509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.758879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.758907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.759271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.759300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 
00:40:04.891 [2024-11-28 13:10:34.759669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.759697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.760074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.760101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.760467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.760502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.760870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.760898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.761287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.761318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 
00:40:04.891 [2024-11-28 13:10:34.761667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.891 [2024-11-28 13:10:34.761697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.891 qpair failed and we were unable to recover it. 00:40:04.891 [2024-11-28 13:10:34.762046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.892 [2024-11-28 13:10:34.762075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.892 qpair failed and we were unable to recover it. 00:40:04.892 [2024-11-28 13:10:34.762352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.892 [2024-11-28 13:10:34.762383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.892 qpair failed and we were unable to recover it. 00:40:04.892 [2024-11-28 13:10:34.762730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.892 [2024-11-28 13:10:34.762758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.892 qpair failed and we were unable to recover it. 00:40:04.892 [2024-11-28 13:10:34.762977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.892 [2024-11-28 13:10:34.763005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.892 qpair failed and we were unable to recover it. 
00:40:04.892 [2024-11-28 13:10:34.763258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.892 [2024-11-28 13:10:34.763288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.892 qpair failed and we were unable to recover it. 00:40:04.892 [2024-11-28 13:10:34.763644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.892 [2024-11-28 13:10:34.763671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.892 qpair failed and we were unable to recover it. 00:40:04.892 [2024-11-28 13:10:34.764077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.892 [2024-11-28 13:10:34.764105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.892 qpair failed and we were unable to recover it. 00:40:04.892 [2024-11-28 13:10:34.764514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.892 [2024-11-28 13:10:34.764544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.892 qpair failed and we were unable to recover it. 00:40:04.892 [2024-11-28 13:10:34.764911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.892 [2024-11-28 13:10:34.764942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.892 qpair failed and we were unable to recover it. 
00:40:04.894 [2024-11-28 13:10:34.807809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.894 [2024-11-28 13:10:34.807837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.894 qpair failed and we were unable to recover it. 00:40:04.894 [2024-11-28 13:10:34.808206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.894 [2024-11-28 13:10:34.808235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.894 qpair failed and we were unable to recover it. 00:40:04.894 [2024-11-28 13:10:34.808587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.894 [2024-11-28 13:10:34.808615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.894 qpair failed and we were unable to recover it. 00:40:04.894 [2024-11-28 13:10:34.808981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.894 [2024-11-28 13:10:34.809009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.894 qpair failed and we were unable to recover it. 00:40:04.894 [2024-11-28 13:10:34.809393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.894 [2024-11-28 13:10:34.809421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.894 qpair failed and we were unable to recover it. 
00:40:04.894 [2024-11-28 13:10:34.809791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.894 [2024-11-28 13:10:34.809818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.894 qpair failed and we were unable to recover it. 00:40:04.894 [2024-11-28 13:10:34.810176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.894 [2024-11-28 13:10:34.810206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.894 qpair failed and we were unable to recover it. 00:40:04.894 [2024-11-28 13:10:34.810552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.894 [2024-11-28 13:10:34.810580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.894 qpair failed and we were unable to recover it. 00:40:04.894 [2024-11-28 13:10:34.810943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.894 [2024-11-28 13:10:34.810970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.894 qpair failed and we were unable to recover it. 00:40:04.894 [2024-11-28 13:10:34.811322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.894 [2024-11-28 13:10:34.811351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.894 qpair failed and we were unable to recover it. 
00:40:04.894 [2024-11-28 13:10:34.811719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.894 [2024-11-28 13:10:34.811746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.894 qpair failed and we were unable to recover it. 00:40:04.894 [2024-11-28 13:10:34.812122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.894 [2024-11-28 13:10:34.812150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.894 qpair failed and we were unable to recover it. 00:40:04.894 [2024-11-28 13:10:34.812535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.894 [2024-11-28 13:10:34.812565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.894 qpair failed and we were unable to recover it. 00:40:04.894 [2024-11-28 13:10:34.812892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.894 [2024-11-28 13:10:34.812920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.894 qpair failed and we were unable to recover it. 00:40:04.894 [2024-11-28 13:10:34.813281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.894 [2024-11-28 13:10:34.813312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.894 qpair failed and we were unable to recover it. 
00:40:04.894 [2024-11-28 13:10:34.813677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.894 [2024-11-28 13:10:34.813705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.894 qpair failed and we were unable to recover it. 00:40:04.894 [2024-11-28 13:10:34.814064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.894 [2024-11-28 13:10:34.814093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.894 qpair failed and we were unable to recover it. 00:40:04.894 [2024-11-28 13:10:34.814440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.894 [2024-11-28 13:10:34.814469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.894 qpair failed and we were unable to recover it. 00:40:04.894 [2024-11-28 13:10:34.814843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.894 [2024-11-28 13:10:34.814871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.894 qpair failed and we were unable to recover it. 00:40:04.894 [2024-11-28 13:10:34.815129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.894 [2024-11-28 13:10:34.815167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.894 qpair failed and we were unable to recover it. 
00:40:04.894 [2024-11-28 13:10:34.815530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.894 [2024-11-28 13:10:34.815559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.894 qpair failed and we were unable to recover it. 00:40:04.894 [2024-11-28 13:10:34.815937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.894 [2024-11-28 13:10:34.815965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.894 qpair failed and we were unable to recover it. 00:40:04.894 [2024-11-28 13:10:34.816226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.816255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.816629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.816658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.817027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.817056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 
00:40:04.895 [2024-11-28 13:10:34.817413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.817444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.817804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.817832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.818197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.818226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.818592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.818619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.818844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.818876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 
00:40:04.895 [2024-11-28 13:10:34.819224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.819259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.819643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.819671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.820011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.820040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.820397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.820426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.820793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.820820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 
00:40:04.895 [2024-11-28 13:10:34.821192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.821221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.821461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.821488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.821932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.821966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.822332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.822362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.822719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.822748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 
00:40:04.895 [2024-11-28 13:10:34.823088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.823115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.823468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.823497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.823874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.823903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.824152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.824205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.824548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.824578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 
00:40:04.895 [2024-11-28 13:10:34.824936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.824964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.825331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.825361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.825720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.825747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.826106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.826134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.826394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.826422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 
00:40:04.895 [2024-11-28 13:10:34.826778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.826806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.827187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.827217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.827576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.827604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.827974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.828002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.828381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.828410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 
00:40:04.895 [2024-11-28 13:10:34.828769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.828797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.829155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.829195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.829540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.829567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.829922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.829949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.830317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.830348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 
00:40:04.895 [2024-11-28 13:10:34.830714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.830742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.831102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.831131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.831546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.831575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.831948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.831977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 00:40:04.895 [2024-11-28 13:10:34.832328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.832357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.895 qpair failed and we were unable to recover it. 
00:40:04.895 [2024-11-28 13:10:34.832721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.895 [2024-11-28 13:10:34.832749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.896 qpair failed and we were unable to recover it. 00:40:04.896 [2024-11-28 13:10:34.833103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.896 [2024-11-28 13:10:34.833130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.896 qpair failed and we were unable to recover it. 00:40:04.896 [2024-11-28 13:10:34.833484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.896 [2024-11-28 13:10:34.833512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.896 qpair failed and we were unable to recover it. 00:40:04.896 [2024-11-28 13:10:34.833876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.896 [2024-11-28 13:10:34.833904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.896 qpair failed and we were unable to recover it. 00:40:04.896 [2024-11-28 13:10:34.834272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.896 [2024-11-28 13:10:34.834300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.896 qpair failed and we were unable to recover it. 
00:40:04.896 [2024-11-28 13:10:34.834669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.896 [2024-11-28 13:10:34.834697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.896 qpair failed and we were unable to recover it. 00:40:04.896 [2024-11-28 13:10:34.835051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.896 [2024-11-28 13:10:34.835078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.896 qpair failed and we were unable to recover it. 00:40:04.896 [2024-11-28 13:10:34.835432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.896 [2024-11-28 13:10:34.835461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.896 qpair failed and we were unable to recover it. 00:40:04.896 [2024-11-28 13:10:34.835822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.896 [2024-11-28 13:10:34.835850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.896 qpair failed and we were unable to recover it. 00:40:04.896 [2024-11-28 13:10:34.836218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.896 [2024-11-28 13:10:34.836247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.896 qpair failed and we were unable to recover it. 
00:40:04.896 [2024-11-28 13:10:34.836628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.896 [2024-11-28 13:10:34.836656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.896 qpair failed and we were unable to recover it. 00:40:04.896 [2024-11-28 13:10:34.837026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.896 [2024-11-28 13:10:34.837054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.896 qpair failed and we were unable to recover it. 00:40:04.896 [2024-11-28 13:10:34.837423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.896 [2024-11-28 13:10:34.837457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.896 qpair failed and we were unable to recover it. 00:40:04.896 [2024-11-28 13:10:34.837834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.896 [2024-11-28 13:10:34.837864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.896 qpair failed and we were unable to recover it. 00:40:04.896 [2024-11-28 13:10:34.838221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.896 [2024-11-28 13:10:34.838250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.896 qpair failed and we were unable to recover it. 
00:40:04.896 [2024-11-28 13:10:34.838497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.896 [2024-11-28 13:10:34.838528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.896 qpair failed and we were unable to recover it. 00:40:04.896 [2024-11-28 13:10:34.838885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.896 [2024-11-28 13:10:34.838913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.896 qpair failed and we were unable to recover it. 00:40:04.896 [2024-11-28 13:10:34.839275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.896 [2024-11-28 13:10:34.839305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.896 qpair failed and we were unable to recover it. 00:40:04.896 [2024-11-28 13:10:34.839686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.896 [2024-11-28 13:10:34.839713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.896 qpair failed and we were unable to recover it. 00:40:04.896 [2024-11-28 13:10:34.840079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.896 [2024-11-28 13:10:34.840108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.896 qpair failed and we were unable to recover it. 
00:40:04.896 [2024-11-28 13:10:34.840485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.896 [2024-11-28 13:10:34.840514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.896 qpair failed and we were unable to recover it.
00:40:04.896 [2024-11-28 13:10:34.840859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.896 [2024-11-28 13:10:34.840889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.896 qpair failed and we were unable to recover it.
00:40:04.896 [2024-11-28 13:10:34.841284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.896 [2024-11-28 13:10:34.841313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.896 qpair failed and we were unable to recover it.
00:40:04.896 [2024-11-28 13:10:34.841672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.896 [2024-11-28 13:10:34.841699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.896 qpair failed and we were unable to recover it.
00:40:04.896 [2024-11-28 13:10:34.841933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.896 [2024-11-28 13:10:34.841961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.896 qpair failed and we were unable to recover it.
00:40:04.896 [2024-11-28 13:10:34.842329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.896 [2024-11-28 13:10:34.842359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.896 qpair failed and we were unable to recover it.
00:40:04.896 [2024-11-28 13:10:34.842732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.896 [2024-11-28 13:10:34.842761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.896 qpair failed and we were unable to recover it.
00:40:04.896 [2024-11-28 13:10:34.843026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.896 [2024-11-28 13:10:34.843053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.896 qpair failed and we were unable to recover it.
00:40:04.896 [2024-11-28 13:10:34.843434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.896 [2024-11-28 13:10:34.843462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.896 qpair failed and we were unable to recover it.
00:40:04.896 [2024-11-28 13:10:34.843823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.896 [2024-11-28 13:10:34.843851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.896 qpair failed and we were unable to recover it.
00:40:04.896 [2024-11-28 13:10:34.844231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.896 [2024-11-28 13:10:34.844260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.896 qpair failed and we were unable to recover it.
00:40:04.896 [2024-11-28 13:10:34.844527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.896 [2024-11-28 13:10:34.844554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.896 qpair failed and we were unable to recover it.
00:40:04.896 [2024-11-28 13:10:34.844915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.896 [2024-11-28 13:10:34.844943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.896 qpair failed and we were unable to recover it.
00:40:04.896 [2024-11-28 13:10:34.845199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.896 [2024-11-28 13:10:34.845230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.896 qpair failed and we were unable to recover it.
00:40:04.896 [2024-11-28 13:10:34.845597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.896 [2024-11-28 13:10:34.845624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.896 qpair failed and we were unable to recover it.
00:40:04.896 [2024-11-28 13:10:34.845980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.896 [2024-11-28 13:10:34.846007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.896 qpair failed and we were unable to recover it.
00:40:04.896 [2024-11-28 13:10:34.846389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.896 [2024-11-28 13:10:34.846418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.896 qpair failed and we were unable to recover it.
00:40:04.896 [2024-11-28 13:10:34.846834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.896 [2024-11-28 13:10:34.846861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.896 qpair failed and we were unable to recover it.
00:40:04.896 [2024-11-28 13:10:34.847192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.896 [2024-11-28 13:10:34.847222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.896 qpair failed and we were unable to recover it.
00:40:04.896 [2024-11-28 13:10:34.847595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.896 [2024-11-28 13:10:34.847624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.896 qpair failed and we were unable to recover it.
00:40:04.896 [2024-11-28 13:10:34.847989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.896 [2024-11-28 13:10:34.848016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.896 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.848396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.848425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.848820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.848847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.849194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.849224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.849589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.849617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.849980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.850007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.850372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.850401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.850745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.850773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.851131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.851166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.851524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.851552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.851905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.851933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.852295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.852325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.852684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.852718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.853086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.853113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.853552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.853581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.853955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.853984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.854336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.854366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.854722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.854750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.855097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.855126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.855505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.855534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.855894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.855923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.856288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.856316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.856692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.856719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.857082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.857109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.857486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.857515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.857861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.857889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.858254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.858284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.858646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.858673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.859041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.859068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.859423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.859452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.859759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.859786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.860141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.860175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.860509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.860539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.860915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.860943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.861307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.861337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.861709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.861737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.862110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.862137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.862517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.862546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.862907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.862935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.863300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.863331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.863697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.863724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.864089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.864118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.864458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.864487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.864841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.864870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.865237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.865266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.865627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.865657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.865989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.866017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.866385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.866415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.866785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.866813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.867187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.867216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.867597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.867625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.867965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.867994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.897 [2024-11-28 13:10:34.868382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.897 [2024-11-28 13:10:34.868416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.897 qpair failed and we were unable to recover it.
00:40:04.898 [2024-11-28 13:10:34.868761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.898 [2024-11-28 13:10:34.868791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.898 qpair failed and we were unable to recover it.
00:40:04.898 [2024-11-28 13:10:34.869172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.898 [2024-11-28 13:10:34.869202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.898 qpair failed and we were unable to recover it.
00:40:04.898 [2024-11-28 13:10:34.869546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.898 [2024-11-28 13:10:34.869574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.898 qpair failed and we were unable to recover it.
00:40:04.898 [2024-11-28 13:10:34.869950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.898 [2024-11-28 13:10:34.869977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.898 qpair failed and we were unable to recover it.
00:40:04.898 [2024-11-28 13:10:34.870346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.898 [2024-11-28 13:10:34.870375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.898 qpair failed and we were unable to recover it.
00:40:04.898 [2024-11-28 13:10:34.870807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.898 [2024-11-28 13:10:34.870835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.898 qpair failed and we were unable to recover it.
00:40:04.898 [2024-11-28 13:10:34.871177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.898 [2024-11-28 13:10:34.871207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.898 qpair failed and we were unable to recover it.
00:40:04.898 [2024-11-28 13:10:34.871576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.898 [2024-11-28 13:10:34.871605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.898 qpair failed and we were unable to recover it.
00:40:04.898 [2024-11-28 13:10:34.871969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.898 [2024-11-28 13:10:34.871997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.898 qpair failed and we were unable to recover it.
00:40:04.898 [2024-11-28 13:10:34.872223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.898 [2024-11-28 13:10:34.872255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.898 qpair failed and we were unable to recover it.
00:40:04.898 [2024-11-28 13:10:34.872592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.898 [2024-11-28 13:10:34.872619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.898 qpair failed and we were unable to recover it.
00:40:04.898 [2024-11-28 13:10:34.873056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.898 [2024-11-28 13:10:34.873085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.898 qpair failed and we were unable to recover it.
00:40:04.898 [2024-11-28 13:10:34.873417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.898 [2024-11-28 13:10:34.873447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.898 qpair failed and we were unable to recover it.
00:40:04.898 [2024-11-28 13:10:34.873821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.898 [2024-11-28 13:10:34.873849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.898 qpair failed and we were unable to recover it.
00:40:04.898 [2024-11-28 13:10:34.874218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.898 [2024-11-28 13:10:34.874248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.898 qpair failed and we were unable to recover it.
00:40:04.898 [2024-11-28 13:10:34.874627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.898 [2024-11-28 13:10:34.874654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.898 qpair failed and we were unable to recover it.
00:40:04.898 [2024-11-28 13:10:34.875019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.898 [2024-11-28 13:10:34.875047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.898 qpair failed and we were unable to recover it.
00:40:04.898 [2024-11-28 13:10:34.875389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.898 [2024-11-28 13:10:34.875419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.898 qpair failed and we were unable to recover it.
00:40:04.898 [2024-11-28 13:10:34.875803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.898 [2024-11-28 13:10:34.875831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.898 qpair failed and we were unable to recover it.
00:40:04.898 [2024-11-28 13:10:34.876188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.898 [2024-11-28 13:10:34.876217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.898 qpair failed and we were unable to recover it.
00:40:04.898 [2024-11-28 13:10:34.876588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.898 [2024-11-28 13:10:34.876615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.898 qpair failed and we were unable to recover it.
00:40:04.898 [2024-11-28 13:10:34.876985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.898 [2024-11-28 13:10:34.877013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.898 qpair failed and we were unable to recover it.
00:40:04.898 [2024-11-28 13:10:34.877389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.877418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.877774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.877802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.878173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.878203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.878560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.878587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.878948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.878977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 
00:40:04.898 [2024-11-28 13:10:34.879230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.879262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.879612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.879640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.880026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.880053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.880418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.880448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.880817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.880844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 
00:40:04.898 [2024-11-28 13:10:34.881186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.881215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.881546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.881575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.881960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.881988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.882347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.882376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.882744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.882771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 
00:40:04.898 [2024-11-28 13:10:34.883166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.883195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.883566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.883594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.883960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.883995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.884288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.884318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.884721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.884749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 
00:40:04.898 [2024-11-28 13:10:34.885098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.885128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.885490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.885519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.885890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.885917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.886277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.886305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.886662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.886689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 
00:40:04.898 [2024-11-28 13:10:34.887053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.887082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.887445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.887473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.887768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.887796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.888180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.888210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.888618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.888646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 
00:40:04.898 [2024-11-28 13:10:34.888995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.889023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.889384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.889414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.898 [2024-11-28 13:10:34.889773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.898 [2024-11-28 13:10:34.889802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.898 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.890178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.890207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.890560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.890588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 
00:40:04.899 [2024-11-28 13:10:34.890951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.890980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.891337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.891365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.891722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.891750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.892097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.892124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.892508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.892538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 
00:40:04.899 [2024-11-28 13:10:34.892902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.892930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.893305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.893335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.893756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.893784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.894143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.894180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.894519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.894547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 
00:40:04.899 [2024-11-28 13:10:34.894904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.894931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.895299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.895329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.895680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.895707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.896073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.896100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.896526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.896555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 
00:40:04.899 [2024-11-28 13:10:34.896915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.896942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.897310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.897340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.897596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.897623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.897901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.897928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.898260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.898290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 
00:40:04.899 [2024-11-28 13:10:34.898654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.898682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.899032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.899060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.899271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.899305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.899653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.899682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.900039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.900067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 
00:40:04.899 [2024-11-28 13:10:34.900407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.900436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.900787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.900815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.901175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.901205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.901558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.901585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.901890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.901919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 
00:40:04.899 [2024-11-28 13:10:34.902271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.902300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.902645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.902673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.903039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.903066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.903313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.903341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.903666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.903696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 
00:40:04.899 [2024-11-28 13:10:34.904024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.904052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.904391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.904420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.904811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.904839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.905184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.905212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.905558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.905586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 
00:40:04.899 [2024-11-28 13:10:34.905941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.905969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.906298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.906327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.906679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.906706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.907053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.899 [2024-11-28 13:10:34.907080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.899 qpair failed and we were unable to recover it. 00:40:04.899 [2024-11-28 13:10:34.907434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.907461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 
00:40:04.900 [2024-11-28 13:10:34.907815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.907843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.908202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.908230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.908602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.908629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.908987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.909015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.909362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.909392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 
00:40:04.900 [2024-11-28 13:10:34.909738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.909765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.910096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.910124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.910480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.910510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.910857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.910885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.911257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.911287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 
00:40:04.900 [2024-11-28 13:10:34.911632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.911660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.912005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.912034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.912374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.912403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.912740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.912769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.913117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.913145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 
00:40:04.900 [2024-11-28 13:10:34.913496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.913524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.913874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.913902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.914237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.914272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.914643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.914671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.915014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.915042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 
00:40:04.900 [2024-11-28 13:10:34.915463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.915492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.915814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.915848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.916194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.916223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.916565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.916593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.916952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.916980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 
00:40:04.900 [2024-11-28 13:10:34.917329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.917358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.917689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.917716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.918051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.918078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.918457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.918486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.918829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.918857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 
00:40:04.900 [2024-11-28 13:10:34.919206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.919235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.919625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.919653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.919983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.920011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.920343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.920372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.920694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.920722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 
00:40:04.900 [2024-11-28 13:10:34.920959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.920987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.921319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.921349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.921734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.921762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.922105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.922132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.922508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.922536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 
00:40:04.900 [2024-11-28 13:10:34.922860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.922888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.923132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.923172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.923508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.923536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.923898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.923925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.924274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.924304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 
00:40:04.900 [2024-11-28 13:10:34.924636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.924664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.925007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.925034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.925375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.925403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.925745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.925773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.926151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.926186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 
00:40:04.900 [2024-11-28 13:10:34.926544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.926572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.926923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.926950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.927124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.927155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.927521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.927549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.927895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.927922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 
00:40:04.900 [2024-11-28 13:10:34.928283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.928312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.928653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.928680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.929034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.929069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.929472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.929501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.900 qpair failed and we were unable to recover it. 00:40:04.900 [2024-11-28 13:10:34.929744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.900 [2024-11-28 13:10:34.929770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 
00:40:04.901 [2024-11-28 13:10:34.930003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.930031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.930345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.930372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.930718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.930745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.931102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.931130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.931493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.931522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 
00:40:04.901 [2024-11-28 13:10:34.931869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.931897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.932247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.932277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.932604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.932632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.932861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.932892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.933221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.933250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 
00:40:04.901 [2024-11-28 13:10:34.933619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.933646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.933992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.934020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.934386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.934414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.934785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.934812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.935129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.935167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 
00:40:04.901 [2024-11-28 13:10:34.935526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.935555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.935890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.935917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.936307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.936336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.936730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.936758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.937100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.937127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 
00:40:04.901 [2024-11-28 13:10:34.937523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.937552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.937890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.937918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.938151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.938191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.938530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.938558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.938907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.938953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 
00:40:04.901 [2024-11-28 13:10:34.939318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.939348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.939695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.939722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.940079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.940106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.940436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.940466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.940812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.940839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 
00:40:04.901 [2024-11-28 13:10:34.941189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.941219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.941499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.941527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.941863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.941891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.942246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.942275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.942635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.942662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 
00:40:04.901 [2024-11-28 13:10:34.943034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.943062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.943416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.943445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.943779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.943806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.944150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.944186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.944540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.944568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 
00:40:04.901 [2024-11-28 13:10:34.944918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.944946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.945300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.945330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.945672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.945699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.945936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.945967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.946309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.946337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 
00:40:04.901 [2024-11-28 13:10:34.946687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.946714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.947060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.947087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.947422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.947450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.947799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.947827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 00:40:04.901 [2024-11-28 13:10:34.948181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.901 [2024-11-28 13:10:34.948211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.901 qpair failed and we were unable to recover it. 
00:40:04.901 [2024-11-28 13:10:34.948585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.901 [2024-11-28 13:10:34.948613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.901 qpair failed and we were unable to recover it.
00:40:04.901 [2024-11-28 13:10:34.948957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.901 [2024-11-28 13:10:34.948985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.901 qpair failed and we were unable to recover it.
00:40:04.901 [2024-11-28 13:10:34.949326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.901 [2024-11-28 13:10:34.949355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.901 qpair failed and we were unable to recover it.
00:40:04.901 [2024-11-28 13:10:34.949730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.901 [2024-11-28 13:10:34.949758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.901 qpair failed and we were unable to recover it.
00:40:04.901 [2024-11-28 13:10:34.950098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.901 [2024-11-28 13:10:34.950125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.901 qpair failed and we were unable to recover it.
00:40:04.901 [2024-11-28 13:10:34.950380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.901 [2024-11-28 13:10:34.950409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.901 qpair failed and we were unable to recover it.
00:40:04.901 [2024-11-28 13:10:34.950748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.901 [2024-11-28 13:10:34.950776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.901 qpair failed and we were unable to recover it.
00:40:04.901 [2024-11-28 13:10:34.951174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.901 [2024-11-28 13:10:34.951203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.901 qpair failed and we were unable to recover it.
00:40:04.901 [2024-11-28 13:10:34.951563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.901 [2024-11-28 13:10:34.951590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.901 qpair failed and we were unable to recover it.
00:40:04.901 [2024-11-28 13:10:34.951843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.901 [2024-11-28 13:10:34.951869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.901 qpair failed and we were unable to recover it.
00:40:04.901 [2024-11-28 13:10:34.952212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.901 [2024-11-28 13:10:34.952240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.901 qpair failed and we were unable to recover it.
00:40:04.901 [2024-11-28 13:10:34.952626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.901 [2024-11-28 13:10:34.952654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.901 qpair failed and we were unable to recover it.
00:40:04.901 [2024-11-28 13:10:34.952903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.901 [2024-11-28 13:10:34.952933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.901 qpair failed and we were unable to recover it.
00:40:04.901 [2024-11-28 13:10:34.953277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.901 [2024-11-28 13:10:34.953307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.901 qpair failed and we were unable to recover it.
00:40:04.901 [2024-11-28 13:10:34.953632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.901 [2024-11-28 13:10:34.953666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.901 qpair failed and we were unable to recover it.
00:40:04.901 [2024-11-28 13:10:34.954019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.954047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.954390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.954420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.954837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.954865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.955214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.955242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.955589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.955617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.955855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.955886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.956227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.956256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.956528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.956555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.956879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.956907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.957154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.957194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.957509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.957537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.957882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.957909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.958246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.958276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.958515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.958543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.958902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.958930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.959181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.959210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.959553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.959580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.959932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.959960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.960339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.960368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.960705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.960732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.961070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.961098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.961439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.961467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.961816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.961845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.962191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.962222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.962557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.962584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.962940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.962967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.963268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.963296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.963651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.963679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.963997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.964033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.964374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.964402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.964779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.964807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.965182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.965211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.965563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.965589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.965951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.965980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.966333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.966362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.966832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.966860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.967232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.967261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.967590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.967618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.967854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.967882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.968204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.968239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.968599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.968626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.968966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.968994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.969321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.969350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.969660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.902 [2024-11-28 13:10:34.969688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.902 qpair failed and we were unable to recover it.
00:40:04.902 [2024-11-28 13:10:34.969973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.970000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.970357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.970387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.970726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.970753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.971113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.971140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.971507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.971535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.971887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.971915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.972258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.972287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.972639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.972667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.972915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.972942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.973272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.973302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.973653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.973680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.974100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.974128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.974501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.974530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.974882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.974910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.975276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.975305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.975646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.975673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.976080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.976107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.976445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.976475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.976825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.976853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.977064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.977095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.977443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.977472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.977876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.977905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.978257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.978286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.978666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.978694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.979012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.979040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.979403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.979433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.979659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.979687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.979838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.979864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.980203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.980231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.980561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.980589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.980938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.980966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.981321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.981350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.981593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.981623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.981983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.982011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.982341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.982370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.982730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.903 [2024-11-28 13:10:34.982763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:04.903 qpair failed and we were unable to recover it.
00:40:04.903 [2024-11-28 13:10:34.983108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.903 [2024-11-28 13:10:34.983135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.903 qpair failed and we were unable to recover it. 00:40:04.903 [2024-11-28 13:10:34.983505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.903 [2024-11-28 13:10:34.983533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.903 qpair failed and we were unable to recover it. 00:40:04.903 [2024-11-28 13:10:34.983886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.903 [2024-11-28 13:10:34.983913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.903 qpair failed and we were unable to recover it. 00:40:04.903 [2024-11-28 13:10:34.984257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.903 [2024-11-28 13:10:34.984287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.903 qpair failed and we were unable to recover it. 00:40:04.903 [2024-11-28 13:10:34.984682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.903 [2024-11-28 13:10:34.984710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.903 qpair failed and we were unable to recover it. 
00:40:04.903 [2024-11-28 13:10:34.985080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.903 [2024-11-28 13:10:34.985109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.903 qpair failed and we were unable to recover it. 00:40:04.903 [2024-11-28 13:10:34.985352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.903 [2024-11-28 13:10:34.985380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.903 qpair failed and we were unable to recover it. 00:40:04.903 [2024-11-28 13:10:34.985704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.903 [2024-11-28 13:10:34.985732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.903 qpair failed and we were unable to recover it. 00:40:04.903 [2024-11-28 13:10:34.986148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.903 [2024-11-28 13:10:34.986184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.903 qpair failed and we were unable to recover it. 00:40:04.903 [2024-11-28 13:10:34.986564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.903 [2024-11-28 13:10:34.986592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.903 qpair failed and we were unable to recover it. 
00:40:04.903 [2024-11-28 13:10:34.986953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.903 [2024-11-28 13:10:34.986981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.903 qpair failed and we were unable to recover it. 00:40:04.903 [2024-11-28 13:10:34.987340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.903 [2024-11-28 13:10:34.987368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.903 qpair failed and we were unable to recover it. 00:40:04.903 [2024-11-28 13:10:34.987744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.903 [2024-11-28 13:10:34.987772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.903 qpair failed and we were unable to recover it. 00:40:04.903 [2024-11-28 13:10:34.988115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.903 [2024-11-28 13:10:34.988143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.903 qpair failed and we were unable to recover it. 00:40:04.903 [2024-11-28 13:10:34.988496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.903 [2024-11-28 13:10:34.988524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.903 qpair failed and we were unable to recover it. 
00:40:04.903 [2024-11-28 13:10:34.988876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.903 [2024-11-28 13:10:34.988904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.903 qpair failed and we were unable to recover it. 00:40:04.903 [2024-11-28 13:10:34.989284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.903 [2024-11-28 13:10:34.989314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.903 qpair failed and we were unable to recover it. 00:40:04.903 [2024-11-28 13:10:34.989703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.903 [2024-11-28 13:10:34.989730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.903 qpair failed and we were unable to recover it. 00:40:04.903 [2024-11-28 13:10:34.990084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.903 [2024-11-28 13:10:34.990111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.903 qpair failed and we were unable to recover it. 00:40:04.903 [2024-11-28 13:10:34.990499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.903 [2024-11-28 13:10:34.990528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.903 qpair failed and we were unable to recover it. 
00:40:04.903 [2024-11-28 13:10:34.990877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.903 [2024-11-28 13:10:34.990904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.903 qpair failed and we were unable to recover it. 00:40:04.903 [2024-11-28 13:10:34.991238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.991266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:34.991580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.991608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:34.991856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.991882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:34.992222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.992251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 
00:40:04.904 [2024-11-28 13:10:34.992612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.992641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:34.992904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.992931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:34.993287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.993317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:34.993677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.993705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:34.994071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.994098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 
00:40:04.904 [2024-11-28 13:10:34.994494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.994522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:34.994861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.994888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:34.995229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.995258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:34.995605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.995642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:34.995871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.995898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 
00:40:04.904 [2024-11-28 13:10:34.996245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.996274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:34.996615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.996642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:34.996982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.997009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:34.997362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.997391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:34.997730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.997764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 
00:40:04.904 [2024-11-28 13:10:34.998083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.998111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:34.998293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.998322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:34.998705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.998732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:34.999061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.999088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:34.999345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.999374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 
00:40:04.904 [2024-11-28 13:10:34.999721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:34.999748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:35.000079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:35.000108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:35.000478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:35.000507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:35.000769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:35.000796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:35.001132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:35.001168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 
00:40:04.904 [2024-11-28 13:10:35.001483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:35.001510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:35.001737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:35.001764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:35.002120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:35.002147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:35.002390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:35.002418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:35.002676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:35.002703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 
00:40:04.904 [2024-11-28 13:10:35.003035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:35.003063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:35.003404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:35.003433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:35.003772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:35.003800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:35.004175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:35.004205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 00:40:04.904 [2024-11-28 13:10:35.004555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.904 [2024-11-28 13:10:35.004583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:04.904 qpair failed and we were unable to recover it. 
00:40:05.171 [2024-11-28 13:10:35.004822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.171 [2024-11-28 13:10:35.004850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.171 qpair failed and we were unable to recover it. 00:40:05.171 [2024-11-28 13:10:35.005186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.171 [2024-11-28 13:10:35.005216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.171 qpair failed and we were unable to recover it. 00:40:05.171 [2024-11-28 13:10:35.005427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.171 [2024-11-28 13:10:35.005453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.171 qpair failed and we were unable to recover it. 00:40:05.171 [2024-11-28 13:10:35.005790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.171 [2024-11-28 13:10:35.005818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.171 qpair failed and we were unable to recover it. 00:40:05.171 [2024-11-28 13:10:35.006169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.171 [2024-11-28 13:10:35.006200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.171 qpair failed and we were unable to recover it. 
00:40:05.171 [2024-11-28 13:10:35.006522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.171 [2024-11-28 13:10:35.006550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.171 qpair failed and we were unable to recover it. 00:40:05.171 [2024-11-28 13:10:35.006778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.006806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 00:40:05.172 [2024-11-28 13:10:35.007120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.007149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 00:40:05.172 [2024-11-28 13:10:35.007497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.007525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 00:40:05.172 [2024-11-28 13:10:35.007871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.007899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 
00:40:05.172 [2024-11-28 13:10:35.008251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.008280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 00:40:05.172 [2024-11-28 13:10:35.008622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.008649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 00:40:05.172 [2024-11-28 13:10:35.008982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.009009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 00:40:05.172 [2024-11-28 13:10:35.009380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.009409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 00:40:05.172 [2024-11-28 13:10:35.009770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.009797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 
00:40:05.172 [2024-11-28 13:10:35.010149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.010198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 00:40:05.172 [2024-11-28 13:10:35.010439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.010466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 00:40:05.172 [2024-11-28 13:10:35.010842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.010869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 00:40:05.172 [2024-11-28 13:10:35.011211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.011239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 00:40:05.172 [2024-11-28 13:10:35.011491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.011530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 
00:40:05.172 [2024-11-28 13:10:35.011886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.011913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 00:40:05.172 [2024-11-28 13:10:35.012266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.012295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 00:40:05.172 [2024-11-28 13:10:35.012620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.012648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 00:40:05.172 [2024-11-28 13:10:35.012904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.012932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 00:40:05.172 [2024-11-28 13:10:35.013171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.013202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 
00:40:05.172 [2024-11-28 13:10:35.013582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.013610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 00:40:05.172 [2024-11-28 13:10:35.013951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.013978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 00:40:05.172 [2024-11-28 13:10:35.014334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.014363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 00:40:05.172 [2024-11-28 13:10:35.014696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.014724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 00:40:05.172 [2024-11-28 13:10:35.015142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.172 [2024-11-28 13:10:35.015177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.172 qpair failed and we were unable to recover it. 
00:40:05.172-00:40:05.175 [2024-11-28 13:10:35.015534 .. 13:10:35.054915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. [same three-message sequence repeated 110 times]
00:40:05.175 [2024-11-28 13:10:35.055300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.175 [2024-11-28 13:10:35.055329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.175 qpair failed and we were unable to recover it. 00:40:05.175 [2024-11-28 13:10:35.055678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.175 [2024-11-28 13:10:35.055705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.175 qpair failed and we were unable to recover it. 00:40:05.175 [2024-11-28 13:10:35.056052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.175 [2024-11-28 13:10:35.056080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.175 qpair failed and we were unable to recover it. 00:40:05.175 [2024-11-28 13:10:35.056308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.175 [2024-11-28 13:10:35.056338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.175 qpair failed and we were unable to recover it. 00:40:05.175 [2024-11-28 13:10:35.056686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.175 [2024-11-28 13:10:35.056714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.175 qpair failed and we were unable to recover it. 
00:40:05.175 [2024-11-28 13:10:35.057065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.175 [2024-11-28 13:10:35.057092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.175 qpair failed and we were unable to recover it. 00:40:05.175 [2024-11-28 13:10:35.057446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.175 [2024-11-28 13:10:35.057476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.175 qpair failed and we were unable to recover it. 00:40:05.175 [2024-11-28 13:10:35.057822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.175 [2024-11-28 13:10:35.057848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.175 qpair failed and we were unable to recover it. 00:40:05.175 [2024-11-28 13:10:35.058225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.175 [2024-11-28 13:10:35.058253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.175 qpair failed and we were unable to recover it. 00:40:05.175 [2024-11-28 13:10:35.058594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.175 [2024-11-28 13:10:35.058621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.175 qpair failed and we were unable to recover it. 
00:40:05.175 [2024-11-28 13:10:35.058979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.175 [2024-11-28 13:10:35.059006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.175 qpair failed and we were unable to recover it. 00:40:05.175 [2024-11-28 13:10:35.059372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.175 [2024-11-28 13:10:35.059401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.175 qpair failed and we were unable to recover it. 00:40:05.175 [2024-11-28 13:10:35.059742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.175 [2024-11-28 13:10:35.059770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.175 qpair failed and we were unable to recover it. 00:40:05.175 [2024-11-28 13:10:35.060123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.175 [2024-11-28 13:10:35.060151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.175 qpair failed and we were unable to recover it. 00:40:05.175 [2024-11-28 13:10:35.060506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.175 [2024-11-28 13:10:35.060535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.175 qpair failed and we were unable to recover it. 
00:40:05.175 [2024-11-28 13:10:35.060880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.175 [2024-11-28 13:10:35.060909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.175 qpair failed and we were unable to recover it. 00:40:05.175 [2024-11-28 13:10:35.061136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.175 [2024-11-28 13:10:35.061174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.175 qpair failed and we were unable to recover it. 00:40:05.175 [2024-11-28 13:10:35.061496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.175 [2024-11-28 13:10:35.061524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.175 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.061865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.061893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.062219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.062251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 
00:40:05.176 [2024-11-28 13:10:35.062609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.062636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.062984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.063011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.063263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.063291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.063655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.063683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.063969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.064004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 
00:40:05.176 [2024-11-28 13:10:35.064363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.064392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.064711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.064739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.065131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.065167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.065527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.065555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.065882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.065910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 
00:40:05.176 [2024-11-28 13:10:35.066343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.066372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.066729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.066756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.067101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.067128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.067489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.067518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.067860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.067888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 
00:40:05.176 [2024-11-28 13:10:35.068233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.068262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.068612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.068640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.068883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.068910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.069269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.069299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.069641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.069675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 
00:40:05.176 [2024-11-28 13:10:35.069952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.069979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.070303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.070333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.070673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.070700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.071065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.071092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.071448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.071477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 
00:40:05.176 [2024-11-28 13:10:35.071812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.071840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.072183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.072211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.072621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.072649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.072973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.073000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.176 [2024-11-28 13:10:35.073372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.073402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 
00:40:05.176 [2024-11-28 13:10:35.073740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.176 [2024-11-28 13:10:35.073767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.176 qpair failed and we were unable to recover it. 00:40:05.177 [2024-11-28 13:10:35.073999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.074030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 00:40:05.177 [2024-11-28 13:10:35.074497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.074527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 00:40:05.177 [2024-11-28 13:10:35.074867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.074894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 00:40:05.177 [2024-11-28 13:10:35.075241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.075270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 
00:40:05.177 [2024-11-28 13:10:35.075605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.075633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 00:40:05.177 [2024-11-28 13:10:35.076010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.076037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 00:40:05.177 [2024-11-28 13:10:35.076373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.076402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 00:40:05.177 [2024-11-28 13:10:35.076753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.076783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 00:40:05.177 [2024-11-28 13:10:35.077202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.077231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 
00:40:05.177 [2024-11-28 13:10:35.077573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.077600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 00:40:05.177 [2024-11-28 13:10:35.077952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.077980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 00:40:05.177 [2024-11-28 13:10:35.078244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.078272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 00:40:05.177 [2024-11-28 13:10:35.078598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.078626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 00:40:05.177 [2024-11-28 13:10:35.078966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.078994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 
00:40:05.177 [2024-11-28 13:10:35.079331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.079360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 00:40:05.177 [2024-11-28 13:10:35.079678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.079707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 00:40:05.177 [2024-11-28 13:10:35.080051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.080079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 00:40:05.177 [2024-11-28 13:10:35.080411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.080440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 00:40:05.177 [2024-11-28 13:10:35.080788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.080815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 
00:40:05.177 [2024-11-28 13:10:35.081182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.081211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 00:40:05.177 [2024-11-28 13:10:35.081515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.081543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 00:40:05.177 [2024-11-28 13:10:35.081885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.081913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 00:40:05.177 [2024-11-28 13:10:35.082278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.082308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 00:40:05.177 [2024-11-28 13:10:35.082683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.177 [2024-11-28 13:10:35.082710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.177 qpair failed and we were unable to recover it. 
00:40:05.177 [2024-11-28 13:10:35.083067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.177 [2024-11-28 13:10:35.083095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:05.177 qpair failed and we were unable to recover it.
[The same three-line failure sequence — posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats continuously from 13:10:35.083 through 13:10:35.125.]
00:40:05.180 [2024-11-28 13:10:35.125485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.180 [2024-11-28 13:10:35.125514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.180 qpair failed and we were unable to recover it. 00:40:05.180 [2024-11-28 13:10:35.125845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.180 [2024-11-28 13:10:35.125873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.180 qpair failed and we were unable to recover it. 00:40:05.180 [2024-11-28 13:10:35.126109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.180 [2024-11-28 13:10:35.126139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.180 qpair failed and we were unable to recover it. 00:40:05.180 [2024-11-28 13:10:35.126497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.180 [2024-11-28 13:10:35.126526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.180 qpair failed and we were unable to recover it. 00:40:05.180 [2024-11-28 13:10:35.126869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.180 [2024-11-28 13:10:35.126896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.180 qpair failed and we were unable to recover it. 
00:40:05.180 [2024-11-28 13:10:35.127233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.180 [2024-11-28 13:10:35.127269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.180 qpair failed and we were unable to recover it. 00:40:05.180 [2024-11-28 13:10:35.127611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.180 [2024-11-28 13:10:35.127639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.180 qpair failed and we were unable to recover it. 00:40:05.180 [2024-11-28 13:10:35.127987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.180 [2024-11-28 13:10:35.128014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.180 qpair failed and we were unable to recover it. 00:40:05.180 [2024-11-28 13:10:35.128376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.180 [2024-11-28 13:10:35.128405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.180 qpair failed and we were unable to recover it. 00:40:05.180 [2024-11-28 13:10:35.128833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.180 [2024-11-28 13:10:35.128861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.180 qpair failed and we were unable to recover it. 
00:40:05.181 [2024-11-28 13:10:35.129212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.129241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.129590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.129617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.129958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.129985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.130326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.130354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.130711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.130739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 
00:40:05.181 [2024-11-28 13:10:35.131082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.131109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.131452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.131481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.131834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.131862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.132195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.132230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.132579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.132606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 
00:40:05.181 [2024-11-28 13:10:35.132950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.132978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.133330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.133359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.133695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.133723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.134067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.134095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.134470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.134498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 
00:40:05.181 [2024-11-28 13:10:35.134841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.134869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.135216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.135245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.135603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.135631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.135980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.136008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.136408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.136438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 
00:40:05.181 [2024-11-28 13:10:35.136772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.136799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.137151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.137186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.137520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.137550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.137891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.137919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.138256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.138285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 
00:40:05.181 [2024-11-28 13:10:35.138633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.138661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.138986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.139019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.139327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.139356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.139693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.139721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.140067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.140094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 
00:40:05.181 [2024-11-28 13:10:35.140340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.140369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.140742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.140770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.141088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.141115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.141485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.141514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.141872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.141900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 
00:40:05.181 [2024-11-28 13:10:35.142244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.181 [2024-11-28 13:10:35.142273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.181 qpair failed and we were unable to recover it. 00:40:05.181 [2024-11-28 13:10:35.142619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.142645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.142876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.142903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.143227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.143263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.143652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.143679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 
00:40:05.182 [2024-11-28 13:10:35.143998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.144026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.144371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.144400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.144745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.144772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.145102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.145135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.145490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.145518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 
00:40:05.182 [2024-11-28 13:10:35.145860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.145887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.146226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.146254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.146620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.146647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.146990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.147017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.147374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.147402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 
00:40:05.182 [2024-11-28 13:10:35.147766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.147794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.148138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.148173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.148521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.148548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.148908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.148936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.149286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.149315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 
00:40:05.182 [2024-11-28 13:10:35.149652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.149680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.150013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.150040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.150370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.150399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.150743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.150771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.151126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.151154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 
00:40:05.182 [2024-11-28 13:10:35.151495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.151523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.151853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.151881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.152230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.152259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.152604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.152632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.153003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.153030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 
00:40:05.182 [2024-11-28 13:10:35.153377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.153406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.153764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.153797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.154010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.154041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.154396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.154425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.154660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.154687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 
00:40:05.182 [2024-11-28 13:10:35.155020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.155047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.155310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.182 [2024-11-28 13:10:35.155338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.182 qpair failed and we were unable to recover it. 00:40:05.182 [2024-11-28 13:10:35.155664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.155692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.156039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.156066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.156413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.156441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 
00:40:05.183 [2024-11-28 13:10:35.156791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.156819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.157194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.157224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.157581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.157609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.157958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.157985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.158365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.158394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 
00:40:05.183 [2024-11-28 13:10:35.158739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.158768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.159007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.159034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.159368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.159397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.159741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.159771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.160100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.160127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 
00:40:05.183 [2024-11-28 13:10:35.160517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.160545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.160882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.160910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.161266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.161295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.161627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.161654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.162021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.162049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 
00:40:05.183 [2024-11-28 13:10:35.162382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.162412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.162770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.162797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.163154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.163189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.163537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.163567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.163921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.163948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 
00:40:05.183 [2024-11-28 13:10:35.164300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.164329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.164683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.164710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.165043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.165070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.165410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.165439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.165790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.165818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 
00:40:05.183 [2024-11-28 13:10:35.166237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.166267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.166606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.166634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.166972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.167000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.167373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.167401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.167744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.167772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 
00:40:05.183 [2024-11-28 13:10:35.168103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.168132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.168473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.168506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.168845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.183 [2024-11-28 13:10:35.168873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.183 qpair failed and we were unable to recover it. 00:40:05.183 [2024-11-28 13:10:35.169226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.169254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.169629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.169657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 
00:40:05.184 [2024-11-28 13:10:35.170002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.170029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.170382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.170412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.170761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.170789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.171217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.171245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.171556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.171585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 
00:40:05.184 [2024-11-28 13:10:35.171924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.171952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.172256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.172284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.172583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.172610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.172956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.172983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.173325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.173353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 
00:40:05.184 [2024-11-28 13:10:35.173704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.173732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.174078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.174106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.174436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.174465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.174818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.174846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.175193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.175222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 
00:40:05.184 [2024-11-28 13:10:35.175586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.175613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.175952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.175979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.176268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.176296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.176624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.176652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.177032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.177059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 
00:40:05.184 [2024-11-28 13:10:35.177313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.177341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.177748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.177776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.178115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.178142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.178532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.178561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.178984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.179013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 
00:40:05.184 [2024-11-28 13:10:35.179324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.179353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.179685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.179712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.180070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.180097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.180452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.180480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.180824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.180851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 
00:40:05.184 [2024-11-28 13:10:35.181190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.181219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.181545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.181573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.181811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.181842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.182195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.182224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 00:40:05.184 [2024-11-28 13:10:35.182573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.184 [2024-11-28 13:10:35.182606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.184 qpair failed and we were unable to recover it. 
00:40:05.185 [2024-11-28 13:10:35.182914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.185 [2024-11-28 13:10:35.182941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.185 qpair failed and we were unable to recover it. 00:40:05.185 [2024-11-28 13:10:35.183202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.185 [2024-11-28 13:10:35.183245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.185 qpair failed and we were unable to recover it. 00:40:05.185 [2024-11-28 13:10:35.183590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.185 [2024-11-28 13:10:35.183618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.185 qpair failed and we were unable to recover it. 00:40:05.185 [2024-11-28 13:10:35.183861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.185 [2024-11-28 13:10:35.183888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.185 qpair failed and we were unable to recover it. 00:40:05.185 [2024-11-28 13:10:35.184222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.185 [2024-11-28 13:10:35.184250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.185 qpair failed and we were unable to recover it. 
00:40:05.185 [2024-11-28 13:10:35.184593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.185 [2024-11-28 13:10:35.184621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.185 qpair failed and we were unable to recover it. 00:40:05.185 [2024-11-28 13:10:35.184992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.185 [2024-11-28 13:10:35.185019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.185 qpair failed and we were unable to recover it. 00:40:05.185 [2024-11-28 13:10:35.185425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.185 [2024-11-28 13:10:35.185453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.185 qpair failed and we were unable to recover it. 00:40:05.185 [2024-11-28 13:10:35.185788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.185 [2024-11-28 13:10:35.185815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.185 qpair failed and we were unable to recover it. 00:40:05.185 [2024-11-28 13:10:35.186144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.185 [2024-11-28 13:10:35.186178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.185 qpair failed and we were unable to recover it. 
00:40:05.185 [2024-11-28 13:10:35.186525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.185 [2024-11-28 13:10:35.186552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.185 qpair failed and we were unable to recover it. 00:40:05.185 [2024-11-28 13:10:35.186839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.185 [2024-11-28 13:10:35.186867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.185 qpair failed and we were unable to recover it. 00:40:05.185 [2024-11-28 13:10:35.187182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.185 [2024-11-28 13:10:35.187212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.185 qpair failed and we were unable to recover it. 00:40:05.185 [2024-11-28 13:10:35.187543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.185 [2024-11-28 13:10:35.187571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.185 qpair failed and we were unable to recover it. 00:40:05.185 [2024-11-28 13:10:35.187798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.185 [2024-11-28 13:10:35.187829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.185 qpair failed and we were unable to recover it. 
00:40:05.185 [2024-11-28 13:10:35.188240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.185 [2024-11-28 13:10:35.188270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.185 qpair failed and we were unable to recover it. 00:40:05.185 [2024-11-28 13:10:35.188497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.185 [2024-11-28 13:10:35.188527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.185 qpair failed and we were unable to recover it. 00:40:05.185 [2024-11-28 13:10:35.188932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.185 [2024-11-28 13:10:35.188960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.185 qpair failed and we were unable to recover it. 00:40:05.185 [2024-11-28 13:10:35.189273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.185 [2024-11-28 13:10:35.189302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.185 qpair failed and we were unable to recover it. 00:40:05.185 [2024-11-28 13:10:35.189659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.185 [2024-11-28 13:10:35.189686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.185 qpair failed and we were unable to recover it. 
00:40:05.188 [2024-11-28 13:10:35.230462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.188 [2024-11-28 13:10:35.230491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.188 qpair failed and we were unable to recover it. 00:40:05.188 [2024-11-28 13:10:35.230849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.188 [2024-11-28 13:10:35.230877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.188 qpair failed and we were unable to recover it. 00:40:05.188 [2024-11-28 13:10:35.231226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.188 [2024-11-28 13:10:35.231254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.188 qpair failed and we were unable to recover it. 00:40:05.188 [2024-11-28 13:10:35.231596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.188 [2024-11-28 13:10:35.231624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.188 qpair failed and we were unable to recover it. 00:40:05.188 [2024-11-28 13:10:35.231980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.188 [2024-11-28 13:10:35.232008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.188 qpair failed and we were unable to recover it. 
00:40:05.188 [2024-11-28 13:10:35.232252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.188 [2024-11-28 13:10:35.232281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.188 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.232640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.232668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.233003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.233031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.233387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.233417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.233747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.233776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 
00:40:05.189 [2024-11-28 13:10:35.234116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.234144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.234482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.234511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.234731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.234759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.235108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.235136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.235495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.235523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 
00:40:05.189 [2024-11-28 13:10:35.235764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.235792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.236023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.236051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.236391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.236421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.236759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.236789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.237127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.237156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 
00:40:05.189 [2024-11-28 13:10:35.237451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.237479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.237709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.237738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.238082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.238111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.238451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.238481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.238825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.238854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 
00:40:05.189 [2024-11-28 13:10:35.239253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.239283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.239629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.239656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.240003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.240037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.240376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.240405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.240665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.240692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 
00:40:05.189 [2024-11-28 13:10:35.241032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.241060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.241385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.241421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.241738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.241766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.242092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.242119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.242480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.242508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 
00:40:05.189 [2024-11-28 13:10:35.242748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.242775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.243167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.243196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.243621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.243649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.243989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.244016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.244352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.244381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 
00:40:05.189 [2024-11-28 13:10:35.244666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.244694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.244905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.244932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.189 qpair failed and we were unable to recover it. 00:40:05.189 [2024-11-28 13:10:35.245337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.189 [2024-11-28 13:10:35.245367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.245782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.245810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.246132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.246169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 
00:40:05.190 [2024-11-28 13:10:35.246535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.246563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.246786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.246813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.247169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.247198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.247553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.247580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.247831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.247858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 
00:40:05.190 [2024-11-28 13:10:35.248196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.248226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.248570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.248598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.248834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.248865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.249208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.249237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.249483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.249511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 
00:40:05.190 [2024-11-28 13:10:35.249867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.249894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.250238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.250267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.250633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.250661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.250997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.251025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.251355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.251384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 
00:40:05.190 [2024-11-28 13:10:35.251719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.251747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.252095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.252123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.252372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.252400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.252733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.252760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.253104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.253132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 
00:40:05.190 [2024-11-28 13:10:35.253497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.253531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.253879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.253907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.254252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.254282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.254625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.254653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.254899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.254926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 
00:40:05.190 [2024-11-28 13:10:35.255314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.255343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.255599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.255631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.255974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.256002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.190 [2024-11-28 13:10:35.256338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.190 [2024-11-28 13:10:35.256366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.190 qpair failed and we were unable to recover it. 00:40:05.191 [2024-11-28 13:10:35.256573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.191 [2024-11-28 13:10:35.256600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.191 qpair failed and we were unable to recover it. 
00:40:05.191 [2024-11-28 13:10:35.256963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.191 [2024-11-28 13:10:35.256990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.191 qpair failed and we were unable to recover it. 
[... the same error triple (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously with advancing timestamps through [2024-11-28 13:10:35.298023] ...]
00:40:05.469 [2024-11-28 13:10:35.298409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.469 [2024-11-28 13:10:35.298438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.469 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.298721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.298754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.299084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.299113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.299435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.299465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.299719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.299746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 
00:40:05.470 [2024-11-28 13:10:35.299962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.299993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.300324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.300353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.300705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.300733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.301073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.301100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.301540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.301569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 
00:40:05.470 [2024-11-28 13:10:35.301899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.301927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.302272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.302301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.302660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.302688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.303032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.303058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.303411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.303440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 
00:40:05.470 [2024-11-28 13:10:35.303783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.303811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.304152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.304191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.304535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.304562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.304905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.304932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.305186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.305216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 
00:40:05.470 [2024-11-28 13:10:35.305573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.305602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.305945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.305972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.306317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.306346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.306676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.306704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.307051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.307079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 
00:40:05.470 [2024-11-28 13:10:35.307437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.307465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.307822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.307850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.308209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.308238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.308612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.308640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.308996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.309023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 
00:40:05.470 [2024-11-28 13:10:35.309372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.309400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.309745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.309772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.310100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.310127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.310481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.310510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.310825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.310853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 
00:40:05.470 [2024-11-28 13:10:35.311215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.311244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.311594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.311621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.311957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.311985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.470 [2024-11-28 13:10:35.312365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.470 [2024-11-28 13:10:35.312393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.470 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.312746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.312774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 
00:40:05.471 [2024-11-28 13:10:35.313087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.313116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.313389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.313419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.313738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.313767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.314088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.314115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.314361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.314389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 
00:40:05.471 [2024-11-28 13:10:35.314720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.314748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.315099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.315126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.315486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.315515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.315864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.315891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.316254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.316283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 
00:40:05.471 [2024-11-28 13:10:35.316626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.316653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.317015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.317045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.317324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.317353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.317610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.317636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.317965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.317993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 
00:40:05.471 [2024-11-28 13:10:35.318235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.318268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.318603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.318630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.318987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.319015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.319372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.319402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.319762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.319789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 
00:40:05.471 [2024-11-28 13:10:35.320136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.320172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.320490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.320517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.320860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.320888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.321237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.321265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.321635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.321662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 
00:40:05.471 [2024-11-28 13:10:35.322007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.322034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.322377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.322407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.322739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.322766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.323108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.323142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.323371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.323400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 
00:40:05.471 [2024-11-28 13:10:35.323750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.323778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.324126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.324154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.324387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.324419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.324721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.324749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 00:40:05.471 [2024-11-28 13:10:35.325074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.471 [2024-11-28 13:10:35.325102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.471 qpair failed and we were unable to recover it. 
00:40:05.471 [2024-11-28 13:10:35.325429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.471 [2024-11-28 13:10:35.325459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:05.471 qpair failed and we were unable to recover it.
00:40:05.475 [2024-11-28 13:10:35.366997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.367025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.367368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.367397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.367740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.367768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.368107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.368135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.368492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.368520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 
00:40:05.475 [2024-11-28 13:10:35.368860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.368888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.369224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.369254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.369578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.369605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.369946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.369974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.370242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.370270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 
00:40:05.475 [2024-11-28 13:10:35.370682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.370710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.371050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.371076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.371457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.371486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.371812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.371840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.372185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.372214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 
00:40:05.475 [2024-11-28 13:10:35.372555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.372582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.372922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.372948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.373291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.373320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.373664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.373691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.374045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.374072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 
00:40:05.475 [2024-11-28 13:10:35.374409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.374438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.374688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.374715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.375056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.375083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.375323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.375351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.375664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.375691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 
00:40:05.475 [2024-11-28 13:10:35.376040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.376067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.376413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.376441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.376788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.376818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.377150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.377187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.377427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.377456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 
00:40:05.475 [2024-11-28 13:10:35.377865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.377894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.378237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.378267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.378608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.378636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.475 qpair failed and we were unable to recover it. 00:40:05.475 [2024-11-28 13:10:35.378839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.475 [2024-11-28 13:10:35.378865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.379197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.379228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 
00:40:05.476 [2024-11-28 13:10:35.379609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.379637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.379868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.379896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.380224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.380252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.380600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.380634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.380974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.381002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 
00:40:05.476 [2024-11-28 13:10:35.381324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.381353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.381705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.381732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.382072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.382100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.382452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.382482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.382839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.382867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 
00:40:05.476 [2024-11-28 13:10:35.383211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.383239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.383597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.383625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.383860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.383887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.384227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.384256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.384579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.384607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 
00:40:05.476 [2024-11-28 13:10:35.384958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.384985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.385321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.385349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.385710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.385738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.386082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.386108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.386474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.386503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 
00:40:05.476 [2024-11-28 13:10:35.386846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.386874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.387206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.387236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.387598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.387624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.388032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.388061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.388386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.388415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 
00:40:05.476 [2024-11-28 13:10:35.388748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.388774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.389192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.389220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.389550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.389577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.389912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.389940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.390283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.390312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 
00:40:05.476 [2024-11-28 13:10:35.390673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.390701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.391027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.391055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.391402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.391432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.391774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.391800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.392175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.392203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 
00:40:05.476 [2024-11-28 13:10:35.392545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.392574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.476 qpair failed and we were unable to recover it. 00:40:05.476 [2024-11-28 13:10:35.392923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.476 [2024-11-28 13:10:35.392950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.477 qpair failed and we were unable to recover it. 00:40:05.477 [2024-11-28 13:10:35.393210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.477 [2024-11-28 13:10:35.393240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.477 qpair failed and we were unable to recover it. 00:40:05.477 [2024-11-28 13:10:35.393579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.477 [2024-11-28 13:10:35.393607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.477 qpair failed and we were unable to recover it. 00:40:05.477 [2024-11-28 13:10:35.394036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.477 [2024-11-28 13:10:35.394063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.477 qpair failed and we were unable to recover it. 
00:40:05.477 [2024-11-28 13:10:35.394398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.477 [2024-11-28 13:10:35.394427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.477 qpair failed and we were unable to recover it. 
00:40:05.480 [2024-11-28 13:10:35.435844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.435871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.436212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.436241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.436587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.436614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.436962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.436990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.437325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.437353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 
00:40:05.480 [2024-11-28 13:10:35.437702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.437730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.438149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.438184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.438423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.438450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.438841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.438875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.439223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.439253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 
00:40:05.480 [2024-11-28 13:10:35.439576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.439603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.439949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.439977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.440331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.440360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.440713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.440740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.440947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.440978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 
00:40:05.480 [2024-11-28 13:10:35.441328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.441358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.441711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.441738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.442068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.442095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.442466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.442495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.442724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.442751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 
00:40:05.480 [2024-11-28 13:10:35.443077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.443105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.443459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.443489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.443833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.443862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.444205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.444234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.444584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.444612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 
00:40:05.480 [2024-11-28 13:10:35.444961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.444988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.445289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.445319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.445628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.445655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.445806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.445833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.446170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.446198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 
00:40:05.480 [2024-11-28 13:10:35.446513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.480 [2024-11-28 13:10:35.446541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.480 qpair failed and we were unable to recover it. 00:40:05.480 [2024-11-28 13:10:35.446909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.446937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.447351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.447380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.447710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.447738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.448147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.448184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 
00:40:05.481 [2024-11-28 13:10:35.448531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.448559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.448877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.448904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.449253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.449282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.449627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.449654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.450007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.450035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 
00:40:05.481 [2024-11-28 13:10:35.450366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.450394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.450629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.450656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.451008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.451035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.451290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.451317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.451660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.451687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 
00:40:05.481 [2024-11-28 13:10:35.452035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.452063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.452453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.452482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.452839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.452868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.453229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.453264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.453645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.453673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 
00:40:05.481 [2024-11-28 13:10:35.454024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.454052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.454401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.454431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.454665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.454693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.455038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.455066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.455418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.455447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 
00:40:05.481 [2024-11-28 13:10:35.455798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.455826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.456059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.456087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.456422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.456451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.456789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.456817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.457248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.457278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 
00:40:05.481 [2024-11-28 13:10:35.457616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.457644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.457984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.458012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.458414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.458443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.458787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.458815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.459096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.459124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 
00:40:05.481 [2024-11-28 13:10:35.459474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.459503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.459857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.459885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.460205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.460235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.460578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.481 [2024-11-28 13:10:35.460606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.481 qpair failed and we were unable to recover it. 00:40:05.481 [2024-11-28 13:10:35.460933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.482 [2024-11-28 13:10:35.460961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.482 qpair failed and we were unable to recover it. 
00:40:05.482 [2024-11-28 13:10:35.461314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.482 [2024-11-28 13:10:35.461343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.482 qpair failed and we were unable to recover it. 00:40:05.482 [2024-11-28 13:10:35.461679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.482 [2024-11-28 13:10:35.461707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.482 qpair failed and we were unable to recover it. 00:40:05.482 [2024-11-28 13:10:35.462060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.482 [2024-11-28 13:10:35.462089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.482 qpair failed and we were unable to recover it. 00:40:05.482 [2024-11-28 13:10:35.462445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.482 [2024-11-28 13:10:35.462478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.482 qpair failed and we were unable to recover it. 00:40:05.482 [2024-11-28 13:10:35.462710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.482 [2024-11-28 13:10:35.462738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.482 qpair failed and we were unable to recover it. 
00:40:05.482 [2024-11-28 13:10:35.463077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.482 [2024-11-28 13:10:35.463106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:05.482 qpair failed and we were unable to recover it.
00:40:05.485 [2024-11-28 13:10:35.503813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.503841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 00:40:05.485 [2024-11-28 13:10:35.504045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.504072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 00:40:05.485 [2024-11-28 13:10:35.504400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.504429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 00:40:05.485 [2024-11-28 13:10:35.504807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.504835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 00:40:05.485 [2024-11-28 13:10:35.505202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.505231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 
00:40:05.485 [2024-11-28 13:10:35.505619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.505653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 00:40:05.485 [2024-11-28 13:10:35.506002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.506030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 00:40:05.485 [2024-11-28 13:10:35.506382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.506411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 00:40:05.485 [2024-11-28 13:10:35.506774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.506802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 00:40:05.485 [2024-11-28 13:10:35.507017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.507044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 
00:40:05.485 [2024-11-28 13:10:35.507398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.507427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 00:40:05.485 [2024-11-28 13:10:35.507659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.507686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 00:40:05.485 [2024-11-28 13:10:35.507967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.507995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 00:40:05.485 [2024-11-28 13:10:35.508326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.508354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 00:40:05.485 [2024-11-28 13:10:35.508712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.508740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 
00:40:05.485 [2024-11-28 13:10:35.509090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.509118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 00:40:05.485 [2024-11-28 13:10:35.509454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.509483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 00:40:05.485 [2024-11-28 13:10:35.509788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.509815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 00:40:05.485 [2024-11-28 13:10:35.510149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.510187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 00:40:05.485 [2024-11-28 13:10:35.510556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.510585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 
00:40:05.485 [2024-11-28 13:10:35.510813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.510840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 00:40:05.485 [2024-11-28 13:10:35.511201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.511231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 00:40:05.485 [2024-11-28 13:10:35.511601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.511629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 00:40:05.485 [2024-11-28 13:10:35.512024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.512051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 00:40:05.485 [2024-11-28 13:10:35.512396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.512424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 
00:40:05.485 [2024-11-28 13:10:35.512756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.512783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 00:40:05.485 [2024-11-28 13:10:35.513127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.513154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 00:40:05.485 [2024-11-28 13:10:35.513491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.485 [2024-11-28 13:10:35.513519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.485 qpair failed and we were unable to recover it. 00:40:05.485 [2024-11-28 13:10:35.513844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.513871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.514218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.514246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 
00:40:05.486 [2024-11-28 13:10:35.514589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.514615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.514991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.515019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.515370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.515400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.515792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.515819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.516018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.516045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 
00:40:05.486 [2024-11-28 13:10:35.516409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.516438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.516802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.516829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.517192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.517220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.517557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.517585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.517829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.517856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 
00:40:05.486 [2024-11-28 13:10:35.518083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.518111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.518458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.518489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.518827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.518854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.519194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.519226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.519566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.519595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 
00:40:05.486 [2024-11-28 13:10:35.519851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.519884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.520218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.520248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.520583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.520611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.520960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.520987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.521326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.521355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 
00:40:05.486 [2024-11-28 13:10:35.521674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.521708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.522059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.522087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.522341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.522369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.522703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.522731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.523077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.523105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 
00:40:05.486 [2024-11-28 13:10:35.523443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.523472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.523706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.523735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.524088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.524116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.524449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.524478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.524821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.524849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 
00:40:05.486 [2024-11-28 13:10:35.525203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.525233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.525591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.525618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.525850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.525879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.526136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.526171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.526525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.526553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 
00:40:05.486 [2024-11-28 13:10:35.526870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.486 [2024-11-28 13:10:35.526899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.486 qpair failed and we were unable to recover it. 00:40:05.486 [2024-11-28 13:10:35.527260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.487 [2024-11-28 13:10:35.527289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.487 qpair failed and we were unable to recover it. 00:40:05.487 [2024-11-28 13:10:35.527631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.487 [2024-11-28 13:10:35.527660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.487 qpair failed and we were unable to recover it. 00:40:05.487 [2024-11-28 13:10:35.527998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.487 [2024-11-28 13:10:35.528027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.487 qpair failed and we were unable to recover it. 00:40:05.487 [2024-11-28 13:10:35.528379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.487 [2024-11-28 13:10:35.528408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.487 qpair failed and we were unable to recover it. 
00:40:05.487 [2024-11-28 13:10:35.528730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.487 [2024-11-28 13:10:35.528758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.487 qpair failed and we were unable to recover it. 00:40:05.487 [2024-11-28 13:10:35.529092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.487 [2024-11-28 13:10:35.529120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.487 qpair failed and we were unable to recover it. 00:40:05.487 [2024-11-28 13:10:35.529481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.487 [2024-11-28 13:10:35.529512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.487 qpair failed and we were unable to recover it. 00:40:05.487 [2024-11-28 13:10:35.529858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.487 [2024-11-28 13:10:35.529886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.487 qpair failed and we were unable to recover it. 00:40:05.487 [2024-11-28 13:10:35.530251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.487 [2024-11-28 13:10:35.530281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.487 qpair failed and we were unable to recover it. 
00:40:05.487 [2024-11-28 13:10:35.530615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.487 [2024-11-28 13:10:35.530643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:05.487 qpair failed and we were unable to recover it.
[... the same three-record error (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 13:10:35.530615 through 13:10:35.573020 ...]
00:40:05.490 [2024-11-28 13:10:35.573370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.490 [2024-11-28 13:10:35.573400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.490 qpair failed and we were unable to recover it. 00:40:05.490 [2024-11-28 13:10:35.573734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.490 [2024-11-28 13:10:35.573763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.490 qpair failed and we were unable to recover it. 00:40:05.490 [2024-11-28 13:10:35.574107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.490 [2024-11-28 13:10:35.574135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.490 qpair failed and we were unable to recover it. 00:40:05.490 [2024-11-28 13:10:35.574490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.490 [2024-11-28 13:10:35.574519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.490 qpair failed and we were unable to recover it. 00:40:05.490 [2024-11-28 13:10:35.574864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.490 [2024-11-28 13:10:35.574892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.490 qpair failed and we were unable to recover it. 
00:40:05.490 [2024-11-28 13:10:35.575238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.490 [2024-11-28 13:10:35.575266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.490 qpair failed and we were unable to recover it. 00:40:05.490 [2024-11-28 13:10:35.575603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.490 [2024-11-28 13:10:35.575630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.490 qpair failed and we were unable to recover it. 00:40:05.490 [2024-11-28 13:10:35.575980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.490 [2024-11-28 13:10:35.576007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.490 qpair failed and we were unable to recover it. 00:40:05.490 [2024-11-28 13:10:35.576342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.490 [2024-11-28 13:10:35.576371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.490 qpair failed and we were unable to recover it. 00:40:05.490 [2024-11-28 13:10:35.576721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.490 [2024-11-28 13:10:35.576748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.490 qpair failed and we were unable to recover it. 
00:40:05.490 [2024-11-28 13:10:35.577108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.490 [2024-11-28 13:10:35.577136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.490 qpair failed and we were unable to recover it. 00:40:05.490 [2024-11-28 13:10:35.577507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.490 [2024-11-28 13:10:35.577536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.490 qpair failed and we were unable to recover it. 00:40:05.490 [2024-11-28 13:10:35.577883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.490 [2024-11-28 13:10:35.577910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.490 qpair failed and we were unable to recover it. 00:40:05.490 [2024-11-28 13:10:35.578264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.490 [2024-11-28 13:10:35.578293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.490 qpair failed and we were unable to recover it. 00:40:05.490 [2024-11-28 13:10:35.578647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.490 [2024-11-28 13:10:35.578676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.490 qpair failed and we were unable to recover it. 
00:40:05.490 [2024-11-28 13:10:35.579095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.490 [2024-11-28 13:10:35.579128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.490 qpair failed and we were unable to recover it. 00:40:05.490 [2024-11-28 13:10:35.579479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.490 [2024-11-28 13:10:35.579507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.490 qpair failed and we were unable to recover it. 00:40:05.768 [2024-11-28 13:10:35.579854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.768 [2024-11-28 13:10:35.579883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.768 qpair failed and we were unable to recover it. 00:40:05.768 [2024-11-28 13:10:35.580227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.768 [2024-11-28 13:10:35.580257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.768 qpair failed and we were unable to recover it. 00:40:05.768 [2024-11-28 13:10:35.580613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.768 [2024-11-28 13:10:35.580640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.768 qpair failed and we were unable to recover it. 
00:40:05.768 [2024-11-28 13:10:35.580982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.768 [2024-11-28 13:10:35.581010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.768 qpair failed and we were unable to recover it. 00:40:05.768 [2024-11-28 13:10:35.581370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.768 [2024-11-28 13:10:35.581398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.768 qpair failed and we were unable to recover it. 00:40:05.768 [2024-11-28 13:10:35.581741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.768 [2024-11-28 13:10:35.581768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.768 qpair failed and we were unable to recover it. 00:40:05.768 [2024-11-28 13:10:35.582023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.768 [2024-11-28 13:10:35.582051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.768 qpair failed and we were unable to recover it. 00:40:05.768 [2024-11-28 13:10:35.582433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.768 [2024-11-28 13:10:35.582461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.768 qpair failed and we were unable to recover it. 
00:40:05.768 [2024-11-28 13:10:35.582834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.768 [2024-11-28 13:10:35.582861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.768 qpair failed and we were unable to recover it. 00:40:05.768 [2024-11-28 13:10:35.583185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.768 [2024-11-28 13:10:35.583215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.768 qpair failed and we were unable to recover it. 00:40:05.768 [2024-11-28 13:10:35.583549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.768 [2024-11-28 13:10:35.583577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.768 qpair failed and we were unable to recover it. 00:40:05.768 [2024-11-28 13:10:35.583923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.768 [2024-11-28 13:10:35.583950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.768 qpair failed and we were unable to recover it. 00:40:05.768 [2024-11-28 13:10:35.584304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.768 [2024-11-28 13:10:35.584333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.768 qpair failed and we were unable to recover it. 
00:40:05.769 [2024-11-28 13:10:35.584684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.584712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.585058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.585087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.585422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.585450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.585796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.585824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.586083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.586111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 
00:40:05.769 [2024-11-28 13:10:35.586447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.586476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.586773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.586801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.587138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.587174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.587492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.587519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.587864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.587892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 
00:40:05.769 [2024-11-28 13:10:35.588232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.588262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.588621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.588649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.588809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.588840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.589186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.589215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.589588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.589616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 
00:40:05.769 [2024-11-28 13:10:35.589851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.589877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.590207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.590235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.590608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.590636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.591002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.591029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.591347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.591376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 
00:40:05.769 [2024-11-28 13:10:35.591605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.591636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.591977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.592005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.592358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.592387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.592712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.592740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.593054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.593082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 
00:40:05.769 [2024-11-28 13:10:35.593450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.593485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.593825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.593853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.594194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.594223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.594579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.594606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 00:40:05.769 [2024-11-28 13:10:35.594963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.769 [2024-11-28 13:10:35.594991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.769 qpair failed and we were unable to recover it. 
00:40:05.770 [2024-11-28 13:10:35.595330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.770 [2024-11-28 13:10:35.595358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.770 qpair failed and we were unable to recover it. 00:40:05.770 [2024-11-28 13:10:35.595696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.770 [2024-11-28 13:10:35.595724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.770 qpair failed and we were unable to recover it. 00:40:05.770 [2024-11-28 13:10:35.596013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.770 [2024-11-28 13:10:35.596041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.770 qpair failed and we were unable to recover it. 00:40:05.770 [2024-11-28 13:10:35.596448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.770 [2024-11-28 13:10:35.596478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.770 qpair failed and we were unable to recover it. 00:40:05.770 [2024-11-28 13:10:35.596798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.770 [2024-11-28 13:10:35.596827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.770 qpair failed and we were unable to recover it. 
00:40:05.770 [2024-11-28 13:10:35.597188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.770 [2024-11-28 13:10:35.597217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.770 qpair failed and we were unable to recover it. 00:40:05.770 [2024-11-28 13:10:35.597541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.770 [2024-11-28 13:10:35.597570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.770 qpair failed and we were unable to recover it. 00:40:05.770 [2024-11-28 13:10:35.597891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.770 [2024-11-28 13:10:35.597919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.770 qpair failed and we were unable to recover it. 00:40:05.770 [2024-11-28 13:10:35.598168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.770 [2024-11-28 13:10:35.598198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.770 qpair failed and we were unable to recover it. 00:40:05.770 [2024-11-28 13:10:35.598595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.770 [2024-11-28 13:10:35.598623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.770 qpair failed and we were unable to recover it. 
00:40:05.770 [2024-11-28 13:10:35.598948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.770 [2024-11-28 13:10:35.598976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.770 qpair failed and we were unable to recover it. 00:40:05.770 [2024-11-28 13:10:35.599343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.770 [2024-11-28 13:10:35.599372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.770 qpair failed and we were unable to recover it. 00:40:05.770 [2024-11-28 13:10:35.599723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.770 [2024-11-28 13:10:35.599751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.770 qpair failed and we were unable to recover it. 00:40:05.770 [2024-11-28 13:10:35.600084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.770 [2024-11-28 13:10:35.600115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.770 qpair failed and we were unable to recover it. 00:40:05.770 [2024-11-28 13:10:35.600573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.770 [2024-11-28 13:10:35.600602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.770 qpair failed and we were unable to recover it. 
00:40:05.770 [2024-11-28 13:10:35.600915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.770 [2024-11-28 13:10:35.600944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:05.770 qpair failed and we were unable to recover it.
00:40:05.770 [... the identical three-line sequence (connect() failed, errno = 111 at posix.c:1054; sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 at nvme_tcp.c:2288; qpair failed and we were unable to recover it) repeats continuously for every subsequent connection attempt through 2024-11-28 13:10:35.642626 ...]
00:40:05.774 [2024-11-28 13:10:35.643041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.774 [2024-11-28 13:10:35.643069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.774 qpair failed and we were unable to recover it. 00:40:05.774 [2024-11-28 13:10:35.643407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.774 [2024-11-28 13:10:35.643436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.774 qpair failed and we were unable to recover it. 00:40:05.774 [2024-11-28 13:10:35.643785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.774 [2024-11-28 13:10:35.643811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.774 qpair failed and we were unable to recover it. 00:40:05.774 [2024-11-28 13:10:35.644151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.774 [2024-11-28 13:10:35.644186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.774 qpair failed and we were unable to recover it. 00:40:05.774 [2024-11-28 13:10:35.644524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.774 [2024-11-28 13:10:35.644551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.774 qpair failed and we were unable to recover it. 
00:40:05.774 [2024-11-28 13:10:35.644800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.774 [2024-11-28 13:10:35.644827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.774 qpair failed and we were unable to recover it. 00:40:05.774 [2024-11-28 13:10:35.645151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.645190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 00:40:05.775 [2024-11-28 13:10:35.645530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.645558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 00:40:05.775 [2024-11-28 13:10:35.645953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.645982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 00:40:05.775 [2024-11-28 13:10:35.646327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.646356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 
00:40:05.775 [2024-11-28 13:10:35.646702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.646728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 00:40:05.775 [2024-11-28 13:10:35.647096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.647123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 00:40:05.775 [2024-11-28 13:10:35.647572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.647601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 00:40:05.775 [2024-11-28 13:10:35.647917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.647945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 00:40:05.775 [2024-11-28 13:10:35.648295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.648324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 
00:40:05.775 [2024-11-28 13:10:35.648673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.648701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 00:40:05.775 [2024-11-28 13:10:35.648945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.648972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 00:40:05.775 [2024-11-28 13:10:35.649308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.649337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 00:40:05.775 [2024-11-28 13:10:35.649656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.649684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 00:40:05.775 [2024-11-28 13:10:35.650032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.650060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 
00:40:05.775 [2024-11-28 13:10:35.650386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.650415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 00:40:05.775 [2024-11-28 13:10:35.650654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.650685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 00:40:05.775 [2024-11-28 13:10:35.650998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.651027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 00:40:05.775 [2024-11-28 13:10:35.651370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.651398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 00:40:05.775 [2024-11-28 13:10:35.651741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.651769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 
00:40:05.775 [2024-11-28 13:10:35.652113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.652146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 00:40:05.775 [2024-11-28 13:10:35.652510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.652538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 00:40:05.775 [2024-11-28 13:10:35.652855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.652883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 00:40:05.775 [2024-11-28 13:10:35.653206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.653235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 00:40:05.775 [2024-11-28 13:10:35.653661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.653688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 
00:40:05.775 [2024-11-28 13:10:35.654002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.654030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 00:40:05.775 [2024-11-28 13:10:35.654395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.654424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 00:40:05.775 [2024-11-28 13:10:35.654786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.654813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 00:40:05.775 [2024-11-28 13:10:35.655054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.655081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 00:40:05.775 [2024-11-28 13:10:35.655421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.775 [2024-11-28 13:10:35.655450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.775 qpair failed and we were unable to recover it. 
00:40:05.776 [2024-11-28 13:10:35.655802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.655830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.656219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.656248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.656579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.656607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.656948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.656975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.657225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.657254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 
00:40:05.776 [2024-11-28 13:10:35.657651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.657678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.657999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.658027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.658383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.658412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.658777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.658804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.659145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.659181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 
00:40:05.776 [2024-11-28 13:10:35.659461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.659488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.659893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.659920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.660231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.660260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.660605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.660632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.660967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.660994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 
00:40:05.776 [2024-11-28 13:10:35.661325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.661354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.661698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.661726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.662064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.662092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.662437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.662466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.662714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.662744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 
00:40:05.776 [2024-11-28 13:10:35.663120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.663148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.663385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.663417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.663660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.663687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.663911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.663939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.664284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.664313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 
00:40:05.776 [2024-11-28 13:10:35.664658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.664686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.665021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.665049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.665397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.665425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.665757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.776 [2024-11-28 13:10:35.665785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.776 qpair failed and we were unable to recover it. 00:40:05.776 [2024-11-28 13:10:35.666121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.777 [2024-11-28 13:10:35.666157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.777 qpair failed and we were unable to recover it. 
00:40:05.777 [2024-11-28 13:10:35.666510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.777 [2024-11-28 13:10:35.666543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.777 qpair failed and we were unable to recover it. 00:40:05.777 [2024-11-28 13:10:35.666879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.777 [2024-11-28 13:10:35.666907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.777 qpair failed and we were unable to recover it. 00:40:05.777 [2024-11-28 13:10:35.667254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.777 [2024-11-28 13:10:35.667283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.777 qpair failed and we were unable to recover it. 00:40:05.777 [2024-11-28 13:10:35.667618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.777 [2024-11-28 13:10:35.667645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.777 qpair failed and we were unable to recover it. 00:40:05.777 [2024-11-28 13:10:35.668003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.777 [2024-11-28 13:10:35.668030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.777 qpair failed and we were unable to recover it. 
00:40:05.777 [2024-11-28 13:10:35.668273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.777 [2024-11-28 13:10:35.668301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.777 qpair failed and we were unable to recover it. 00:40:05.777 [2024-11-28 13:10:35.668626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.777 [2024-11-28 13:10:35.668653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.777 qpair failed and we were unable to recover it. 00:40:05.777 [2024-11-28 13:10:35.668974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.777 [2024-11-28 13:10:35.669001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.777 qpair failed and we were unable to recover it. 00:40:05.777 [2024-11-28 13:10:35.669337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.777 [2024-11-28 13:10:35.669365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.777 qpair failed and we were unable to recover it. 00:40:05.777 [2024-11-28 13:10:35.669779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.777 [2024-11-28 13:10:35.669806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.777 qpair failed and we were unable to recover it. 
00:40:05.777 [2024-11-28 13:10:35.670112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.777 [2024-11-28 13:10:35.670139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.777 qpair failed and we were unable to recover it. 00:40:05.777 [2024-11-28 13:10:35.670490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.777 [2024-11-28 13:10:35.670518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.777 qpair failed and we were unable to recover it. 00:40:05.777 [2024-11-28 13:10:35.670866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.777 [2024-11-28 13:10:35.670894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.777 qpair failed and we were unable to recover it. 00:40:05.777 [2024-11-28 13:10:35.671231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.777 [2024-11-28 13:10:35.671261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.777 qpair failed and we were unable to recover it. 00:40:05.777 [2024-11-28 13:10:35.671585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.777 [2024-11-28 13:10:35.671613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.777 qpair failed and we were unable to recover it. 
00:40:05.781 [2024-11-28 13:10:35.712917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.781 [2024-11-28 13:10:35.712944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.781 qpair failed and we were unable to recover it. 00:40:05.781 [2024-11-28 13:10:35.713285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.781 [2024-11-28 13:10:35.713315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.781 qpair failed and we were unable to recover it. 00:40:05.781 [2024-11-28 13:10:35.713672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.781 [2024-11-28 13:10:35.713699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.781 qpair failed and we were unable to recover it. 00:40:05.781 [2024-11-28 13:10:35.714022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.781 [2024-11-28 13:10:35.714050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.781 qpair failed and we were unable to recover it. 00:40:05.781 [2024-11-28 13:10:35.714372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.781 [2024-11-28 13:10:35.714401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.781 qpair failed and we were unable to recover it. 
00:40:05.781 [2024-11-28 13:10:35.714739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.781 [2024-11-28 13:10:35.714766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.781 qpair failed and we were unable to recover it. 00:40:05.781 [2024-11-28 13:10:35.715123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.781 [2024-11-28 13:10:35.715150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.781 qpair failed and we were unable to recover it. 00:40:05.781 [2024-11-28 13:10:35.715501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.781 [2024-11-28 13:10:35.715530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.781 qpair failed and we were unable to recover it. 00:40:05.781 [2024-11-28 13:10:35.715886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.781 [2024-11-28 13:10:35.715914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.781 qpair failed and we were unable to recover it. 00:40:05.781 [2024-11-28 13:10:35.716271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.781 [2024-11-28 13:10:35.716301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.781 qpair failed and we were unable to recover it. 
00:40:05.781 [2024-11-28 13:10:35.716653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.781 [2024-11-28 13:10:35.716681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.781 qpair failed and we were unable to recover it. 00:40:05.781 [2024-11-28 13:10:35.717031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.781 [2024-11-28 13:10:35.717058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.781 qpair failed and we were unable to recover it. 00:40:05.781 [2024-11-28 13:10:35.717439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.781 [2024-11-28 13:10:35.717469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.781 qpair failed and we were unable to recover it. 00:40:05.781 [2024-11-28 13:10:35.717787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.781 [2024-11-28 13:10:35.717823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.781 qpair failed and we were unable to recover it. 00:40:05.781 [2024-11-28 13:10:35.718128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.781 [2024-11-28 13:10:35.718156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.781 qpair failed and we were unable to recover it. 
00:40:05.781 [2024-11-28 13:10:35.718498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.781 [2024-11-28 13:10:35.718526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.781 qpair failed and we were unable to recover it. 00:40:05.781 [2024-11-28 13:10:35.718920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.781 [2024-11-28 13:10:35.718948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.781 qpair failed and we were unable to recover it. 00:40:05.781 [2024-11-28 13:10:35.719301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.781 [2024-11-28 13:10:35.719331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.781 qpair failed and we were unable to recover it. 00:40:05.781 [2024-11-28 13:10:35.719765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.781 [2024-11-28 13:10:35.719793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.781 qpair failed and we were unable to recover it. 00:40:05.781 [2024-11-28 13:10:35.720131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.781 [2024-11-28 13:10:35.720168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.781 qpair failed and we were unable to recover it. 
00:40:05.781 [2024-11-28 13:10:35.720541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.781 [2024-11-28 13:10:35.720569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.781 qpair failed and we were unable to recover it. 00:40:05.781 [2024-11-28 13:10:35.720800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.781 [2024-11-28 13:10:35.720828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.721177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.721212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.721563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.721590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.721907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.721935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 
00:40:05.782 [2024-11-28 13:10:35.722249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.722277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.722649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.722676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.723044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.723071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.723417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.723445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.723798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.723827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 
00:40:05.782 [2024-11-28 13:10:35.724194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.724224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.724561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.724590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.724867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.724897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.725211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.725242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.725565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.725594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 
00:40:05.782 [2024-11-28 13:10:35.725963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.725991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.726335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.726365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.726744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.726772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.727005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.727032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.727380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.727408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 
00:40:05.782 [2024-11-28 13:10:35.727756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.727783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.728175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.728204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.728569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.728596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.728931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.728958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.729275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.729304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 
00:40:05.782 [2024-11-28 13:10:35.729693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.729722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.730074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.730102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.730332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.730360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.730739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.730767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.731136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.731174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 
00:40:05.782 [2024-11-28 13:10:35.731510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.782 [2024-11-28 13:10:35.731538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.782 qpair failed and we were unable to recover it. 00:40:05.782 [2024-11-28 13:10:35.731904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.731931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 00:40:05.783 [2024-11-28 13:10:35.732190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.732219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 00:40:05.783 [2024-11-28 13:10:35.732465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.732494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 00:40:05.783 [2024-11-28 13:10:35.732817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.732845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 
00:40:05.783 [2024-11-28 13:10:35.733186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.733216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 00:40:05.783 [2024-11-28 13:10:35.733579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.733608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 00:40:05.783 [2024-11-28 13:10:35.733952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.733979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 00:40:05.783 [2024-11-28 13:10:35.734329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.734357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 00:40:05.783 [2024-11-28 13:10:35.734669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.734696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 
00:40:05.783 [2024-11-28 13:10:35.735077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.735104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 00:40:05.783 [2024-11-28 13:10:35.735401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.735430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 00:40:05.783 [2024-11-28 13:10:35.735656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.735689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 00:40:05.783 [2024-11-28 13:10:35.735919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.735947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 00:40:05.783 [2024-11-28 13:10:35.736275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.736306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 
00:40:05.783 [2024-11-28 13:10:35.736688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.736716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 00:40:05.783 [2024-11-28 13:10:35.737064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.737093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 00:40:05.783 [2024-11-28 13:10:35.737460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.737490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 00:40:05.783 [2024-11-28 13:10:35.737811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.737839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 00:40:05.783 [2024-11-28 13:10:35.738206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.738236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 
00:40:05.783 [2024-11-28 13:10:35.738558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.738586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 00:40:05.783 [2024-11-28 13:10:35.738948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.738976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 00:40:05.783 [2024-11-28 13:10:35.739320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.739349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 00:40:05.783 [2024-11-28 13:10:35.739759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.739787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 00:40:05.783 [2024-11-28 13:10:35.740111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.740139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 
00:40:05.783 [2024-11-28 13:10:35.740509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.740538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 00:40:05.783 [2024-11-28 13:10:35.740893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.740922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 00:40:05.783 [2024-11-28 13:10:35.741251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.741281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 00:40:05.783 [2024-11-28 13:10:35.741673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.741701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 00:40:05.783 [2024-11-28 13:10:35.742054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.783 [2024-11-28 13:10:35.742082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.783 qpair failed and we were unable to recover it. 
00:40:05.787 [2024-11-28 13:10:35.782178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.787 [2024-11-28 13:10:35.782208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.787 qpair failed and we were unable to recover it. 00:40:05.787 [2024-11-28 13:10:35.782550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.787 [2024-11-28 13:10:35.782577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.787 qpair failed and we were unable to recover it. 00:40:05.787 [2024-11-28 13:10:35.782922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.787 [2024-11-28 13:10:35.782950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.787 qpair failed and we were unable to recover it. 00:40:05.787 [2024-11-28 13:10:35.783298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.787 [2024-11-28 13:10:35.783327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.787 qpair failed and we were unable to recover it. 00:40:05.787 [2024-11-28 13:10:35.783729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.787 [2024-11-28 13:10:35.783757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.787 qpair failed and we were unable to recover it. 
00:40:05.787 [2024-11-28 13:10:35.784080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.784107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.784485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.784514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.784840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.784867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.785118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.785145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.785501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.785530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 
00:40:05.788 [2024-11-28 13:10:35.785875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.785903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.786278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.786307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.786664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.786693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.786980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.787007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.787333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.787362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 
00:40:05.788 [2024-11-28 13:10:35.787704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.787732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.788086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.788113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.788509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.788539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.788866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.788894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.789245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.789274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 
00:40:05.788 [2024-11-28 13:10:35.789587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.789614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.789858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.789885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.790228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.790257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.790616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.790643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.790973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.791001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 
00:40:05.788 [2024-11-28 13:10:35.791249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.791281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.791606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.791634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.791980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.792007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.792390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.792420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.792752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.792779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 
00:40:05.788 [2024-11-28 13:10:35.793127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.793155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.793523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.793551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.793892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.793925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.794269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.788 [2024-11-28 13:10:35.794298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.788 qpair failed and we were unable to recover it. 00:40:05.788 [2024-11-28 13:10:35.794652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.794680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 
00:40:05.789 [2024-11-28 13:10:35.795039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.795067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.795421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.795449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.795796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.795824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.796070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.796096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.796390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.796419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 
00:40:05.789 [2024-11-28 13:10:35.796810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.796838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.797171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.797200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.797544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.797572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.797964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.797991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.798333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.798362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 
00:40:05.789 [2024-11-28 13:10:35.798711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.798739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.799092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.799120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.799386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.799414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.799752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.799781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.800107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.800135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 
00:40:05.789 [2024-11-28 13:10:35.800483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.800511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.800852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.800880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.801227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.801256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.801594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.801621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.801965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.801992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 
00:40:05.789 [2024-11-28 13:10:35.802338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.802367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.802790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.802817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.803174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.803203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.803567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.803594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.803908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.803936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 
00:40:05.789 [2024-11-28 13:10:35.804105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.804132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.804471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.804500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.804860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.804887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.805179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.805207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 00:40:05.789 [2024-11-28 13:10:35.805538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.789 [2024-11-28 13:10:35.805566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.789 qpair failed and we were unable to recover it. 
00:40:05.790 [2024-11-28 13:10:35.805915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.805942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.806298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.806331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.806673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.806701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.807041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.807069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.807476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.807505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 
00:40:05.790 [2024-11-28 13:10:35.807852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.807880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.808220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.808249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.808599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.808634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.808983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.809011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.809357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.809385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 
00:40:05.790 [2024-11-28 13:10:35.809723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.809751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.810093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.810120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.810433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.810462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.810821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.810849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.811203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.811232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 
00:40:05.790 [2024-11-28 13:10:35.811575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.811602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.811944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.811972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.812304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.812333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.812670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.812698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.813049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.813077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 
00:40:05.790 [2024-11-28 13:10:35.813429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.813457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.813703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.813731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.814170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.814199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.814547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.814574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.814916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.814944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 
00:40:05.790 [2024-11-28 13:10:35.815299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.815329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.815683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.815710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.816051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.816079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.816417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.790 [2024-11-28 13:10:35.816447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.790 qpair failed and we were unable to recover it. 00:40:05.790 [2024-11-28 13:10:35.816790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.816818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 
00:40:05.791 [2024-11-28 13:10:35.817169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.817198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 00:40:05.791 [2024-11-28 13:10:35.817545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.817572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 00:40:05.791 [2024-11-28 13:10:35.817915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.817943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 00:40:05.791 [2024-11-28 13:10:35.818291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.818319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 00:40:05.791 [2024-11-28 13:10:35.818692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.818720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 
00:40:05.791 [2024-11-28 13:10:35.818971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.818998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 00:40:05.791 [2024-11-28 13:10:35.819415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.819445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 00:40:05.791 [2024-11-28 13:10:35.819785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.819813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 00:40:05.791 [2024-11-28 13:10:35.820179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.820208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 00:40:05.791 [2024-11-28 13:10:35.820541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.820568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 
00:40:05.791 [2024-11-28 13:10:35.820917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.820944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 00:40:05.791 [2024-11-28 13:10:35.821289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.821318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 00:40:05.791 [2024-11-28 13:10:35.821660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.821689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 00:40:05.791 [2024-11-28 13:10:35.822028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.822056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 00:40:05.791 [2024-11-28 13:10:35.822402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.822430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 
00:40:05.791 [2024-11-28 13:10:35.822792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.822820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 00:40:05.791 [2024-11-28 13:10:35.823175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.823205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 00:40:05.791 [2024-11-28 13:10:35.823547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.823580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 00:40:05.791 [2024-11-28 13:10:35.823898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.823925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 00:40:05.791 [2024-11-28 13:10:35.824265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.824296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 
00:40:05.791 [2024-11-28 13:10:35.824619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.824647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 00:40:05.791 [2024-11-28 13:10:35.824991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.825018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 00:40:05.791 [2024-11-28 13:10:35.825381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.825416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 00:40:05.791 [2024-11-28 13:10:35.825725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.825753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 00:40:05.791 [2024-11-28 13:10:35.826102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.826130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 
00:40:05.791 [2024-11-28 13:10:35.826490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.826520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.791 qpair failed and we were unable to recover it. 00:40:05.791 [2024-11-28 13:10:35.826875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.791 [2024-11-28 13:10:35.826903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 00:40:05.792 [2024-11-28 13:10:35.827237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.827266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 00:40:05.792 [2024-11-28 13:10:35.827609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.827636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 00:40:05.792 [2024-11-28 13:10:35.827984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.828012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 
00:40:05.792 [2024-11-28 13:10:35.828370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.828399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 00:40:05.792 [2024-11-28 13:10:35.828725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.828753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 00:40:05.792 [2024-11-28 13:10:35.829096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.829124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 00:40:05.792 [2024-11-28 13:10:35.829488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.829517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 00:40:05.792 [2024-11-28 13:10:35.829871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.829899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 
00:40:05.792 [2024-11-28 13:10:35.830243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.830272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 00:40:05.792 [2024-11-28 13:10:35.830583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.830611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 00:40:05.792 [2024-11-28 13:10:35.830941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.830968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 00:40:05.792 [2024-11-28 13:10:35.831317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.831346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 00:40:05.792 [2024-11-28 13:10:35.831690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.831717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 
00:40:05.792 [2024-11-28 13:10:35.832084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.832111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 00:40:05.792 [2024-11-28 13:10:35.832436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.832466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 00:40:05.792 [2024-11-28 13:10:35.832809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.832837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 00:40:05.792 [2024-11-28 13:10:35.833197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.833228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 00:40:05.792 [2024-11-28 13:10:35.833615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.833644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 
00:40:05.792 [2024-11-28 13:10:35.833986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.834013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 00:40:05.792 [2024-11-28 13:10:35.834411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.834440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 00:40:05.792 [2024-11-28 13:10:35.834666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.834694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 00:40:05.792 [2024-11-28 13:10:35.834909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.834940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 00:40:05.792 [2024-11-28 13:10:35.835302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.835331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 
00:40:05.792 [2024-11-28 13:10:35.835672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.835700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 00:40:05.792 [2024-11-28 13:10:35.836043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.836070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 00:40:05.792 [2024-11-28 13:10:35.836425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.836453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 00:40:05.792 [2024-11-28 13:10:35.836817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.836845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 00:40:05.792 [2024-11-28 13:10:35.837187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.792 [2024-11-28 13:10:35.837216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.792 qpair failed and we were unable to recover it. 
00:40:05.792 [2024-11-28 13:10:35.837438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.793 [2024-11-28 13:10:35.837465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.793 qpair failed and we were unable to recover it. 00:40:05.793 [2024-11-28 13:10:35.837733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.793 [2024-11-28 13:10:35.837761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.793 qpair failed and we were unable to recover it. 00:40:05.793 [2024-11-28 13:10:35.838102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.793 [2024-11-28 13:10:35.838136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.793 qpair failed and we were unable to recover it. 00:40:05.793 [2024-11-28 13:10:35.838520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.793 [2024-11-28 13:10:35.838548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.793 qpair failed and we were unable to recover it. 00:40:05.793 [2024-11-28 13:10:35.838879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.793 [2024-11-28 13:10:35.838906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.793 qpair failed and we were unable to recover it. 
00:40:05.793 [2024-11-28 13:10:35.839259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.793 [2024-11-28 13:10:35.839287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.793 qpair failed and we were unable to recover it. 00:40:05.793 [2024-11-28 13:10:35.839535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.793 [2024-11-28 13:10:35.839566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.793 qpair failed and we were unable to recover it. 00:40:05.793 [2024-11-28 13:10:35.839906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.793 [2024-11-28 13:10:35.839933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.793 qpair failed and we were unable to recover it. 00:40:05.793 [2024-11-28 13:10:35.840280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.793 [2024-11-28 13:10:35.840309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.793 qpair failed and we were unable to recover it. 00:40:05.793 [2024-11-28 13:10:35.840595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.793 [2024-11-28 13:10:35.840622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.793 qpair failed and we were unable to recover it. 
00:40:05.793 [2024-11-28 13:10:35.840951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.793 [2024-11-28 13:10:35.840978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.793 qpair failed and we were unable to recover it. 00:40:05.793 [2024-11-28 13:10:35.841318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.793 [2024-11-28 13:10:35.841349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.793 qpair failed and we were unable to recover it. 00:40:05.793 [2024-11-28 13:10:35.841723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.793 [2024-11-28 13:10:35.841750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.793 qpair failed and we were unable to recover it. 00:40:05.793 [2024-11-28 13:10:35.842091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.793 [2024-11-28 13:10:35.842118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.793 qpair failed and we were unable to recover it. 00:40:05.793 [2024-11-28 13:10:35.842514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.793 [2024-11-28 13:10:35.842544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.793 qpair failed and we were unable to recover it. 
00:40:05.793 [2024-11-28 13:10:35.842873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.793 [2024-11-28 13:10:35.842901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.793 qpair failed and we were unable to recover it. 00:40:05.793 [2024-11-28 13:10:35.843156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.793 [2024-11-28 13:10:35.843192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.793 qpair failed and we were unable to recover it. 00:40:05.793 [2024-11-28 13:10:35.843556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.793 [2024-11-28 13:10:35.843584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.793 qpair failed and we were unable to recover it. 00:40:05.793 [2024-11-28 13:10:35.843939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.793 [2024-11-28 13:10:35.843966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.793 qpair failed and we were unable to recover it. 00:40:05.793 [2024-11-28 13:10:35.844303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.793 [2024-11-28 13:10:35.844332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:05.793 qpair failed and we were unable to recover it. 
00:40:05.793 [2024-11-28 13:10:35.844668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.793 [2024-11-28 13:10:35.844696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:05.793 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats verbatim, differing only in timestamps, from 13:10:35.845040 through 13:10:35.887415 ...]
00:40:06.076 [2024-11-28 13:10:35.887656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.076 [2024-11-28 13:10:35.887683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.076 qpair failed and we were unable to recover it. 00:40:06.076 [2024-11-28 13:10:35.888011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.076 [2024-11-28 13:10:35.888039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.076 qpair failed and we were unable to recover it. 00:40:06.076 [2024-11-28 13:10:35.888372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.076 [2024-11-28 13:10:35.888401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.076 qpair failed and we were unable to recover it. 00:40:06.076 [2024-11-28 13:10:35.888751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.076 [2024-11-28 13:10:35.888778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.076 qpair failed and we were unable to recover it. 00:40:06.076 [2024-11-28 13:10:35.889097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.076 [2024-11-28 13:10:35.889123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.076 qpair failed and we were unable to recover it. 
00:40:06.076 [2024-11-28 13:10:35.889505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.076 [2024-11-28 13:10:35.889534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.076 qpair failed and we were unable to recover it. 00:40:06.076 [2024-11-28 13:10:35.889853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.076 [2024-11-28 13:10:35.889882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.076 qpair failed and we were unable to recover it. 00:40:06.076 [2024-11-28 13:10:35.890197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.076 [2024-11-28 13:10:35.890225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.076 qpair failed and we were unable to recover it. 00:40:06.076 [2024-11-28 13:10:35.890594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.076 [2024-11-28 13:10:35.890622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.076 qpair failed and we were unable to recover it. 00:40:06.076 [2024-11-28 13:10:35.890960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.076 [2024-11-28 13:10:35.890987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.076 qpair failed and we were unable to recover it. 
00:40:06.076 [2024-11-28 13:10:35.891229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.076 [2024-11-28 13:10:35.891257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.076 qpair failed and we were unable to recover it. 00:40:06.076 [2024-11-28 13:10:35.891487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.076 [2024-11-28 13:10:35.891515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 00:40:06.077 [2024-11-28 13:10:35.891838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.891865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 00:40:06.077 [2024-11-28 13:10:35.892226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.892272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 00:40:06.077 [2024-11-28 13:10:35.892596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.892631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 
00:40:06.077 [2024-11-28 13:10:35.892970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.892998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 00:40:06.077 [2024-11-28 13:10:35.893256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.893285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 00:40:06.077 [2024-11-28 13:10:35.893642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.893671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 00:40:06.077 [2024-11-28 13:10:35.893896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.893924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 00:40:06.077 [2024-11-28 13:10:35.894206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.894235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 
00:40:06.077 [2024-11-28 13:10:35.894592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.894619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 00:40:06.077 [2024-11-28 13:10:35.894902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.894930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 00:40:06.077 [2024-11-28 13:10:35.895259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.895287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 00:40:06.077 [2024-11-28 13:10:35.895654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.895681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 00:40:06.077 [2024-11-28 13:10:35.895974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.896001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 
00:40:06.077 [2024-11-28 13:10:35.896390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.896419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 00:40:06.077 [2024-11-28 13:10:35.896763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.896791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 00:40:06.077 [2024-11-28 13:10:35.897111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.897138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 00:40:06.077 [2024-11-28 13:10:35.897498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.897527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 00:40:06.077 [2024-11-28 13:10:35.897882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.897909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 
00:40:06.077 [2024-11-28 13:10:35.898209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.898238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 00:40:06.077 [2024-11-28 13:10:35.898483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.898511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 00:40:06.077 [2024-11-28 13:10:35.898854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.898881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 00:40:06.077 [2024-11-28 13:10:35.899228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.899256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 00:40:06.077 [2024-11-28 13:10:35.899619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.899646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 
00:40:06.077 [2024-11-28 13:10:35.900001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.900030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 00:40:06.077 [2024-11-28 13:10:35.900261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.900290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 00:40:06.077 [2024-11-28 13:10:35.900625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.900653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 00:40:06.077 [2024-11-28 13:10:35.901064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.901092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 00:40:06.077 [2024-11-28 13:10:35.901437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.077 [2024-11-28 13:10:35.901466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.077 qpair failed and we were unable to recover it. 
00:40:06.078 [2024-11-28 13:10:35.901694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.901722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 00:40:06.078 [2024-11-28 13:10:35.902059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.902087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 00:40:06.078 [2024-11-28 13:10:35.902433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.902462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 00:40:06.078 [2024-11-28 13:10:35.902786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.902814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 00:40:06.078 [2024-11-28 13:10:35.903176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.903206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 
00:40:06.078 [2024-11-28 13:10:35.903546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.903573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 00:40:06.078 [2024-11-28 13:10:35.903919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.903946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 00:40:06.078 [2024-11-28 13:10:35.904283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.904313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 00:40:06.078 [2024-11-28 13:10:35.904662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.904689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 00:40:06.078 [2024-11-28 13:10:35.905029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.905057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 
00:40:06.078 [2024-11-28 13:10:35.905431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.905460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 00:40:06.078 [2024-11-28 13:10:35.905808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.905836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 00:40:06.078 [2024-11-28 13:10:35.906206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.906235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 00:40:06.078 [2024-11-28 13:10:35.906610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.906637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 00:40:06.078 [2024-11-28 13:10:35.906983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.907016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 
00:40:06.078 [2024-11-28 13:10:35.907363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.907392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 00:40:06.078 [2024-11-28 13:10:35.907657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.907685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 00:40:06.078 [2024-11-28 13:10:35.908036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.908064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 00:40:06.078 [2024-11-28 13:10:35.908414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.908443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 00:40:06.078 [2024-11-28 13:10:35.908781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.908809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 
00:40:06.078 [2024-11-28 13:10:35.909166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.909195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 00:40:06.078 [2024-11-28 13:10:35.909536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.909565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 00:40:06.078 [2024-11-28 13:10:35.909909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.909937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 00:40:06.078 [2024-11-28 13:10:35.910314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.910343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 00:40:06.078 [2024-11-28 13:10:35.910755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.910782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 
00:40:06.078 [2024-11-28 13:10:35.911109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.911136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 00:40:06.078 [2024-11-28 13:10:35.911509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.911538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 00:40:06.078 [2024-11-28 13:10:35.911885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.911913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.078 qpair failed and we were unable to recover it. 00:40:06.078 [2024-11-28 13:10:35.912264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.078 [2024-11-28 13:10:35.912294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.079 qpair failed and we were unable to recover it. 00:40:06.079 [2024-11-28 13:10:35.912650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.079 [2024-11-28 13:10:35.912678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.079 qpair failed and we were unable to recover it. 
00:40:06.079 [2024-11-28 13:10:35.913030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.079 [2024-11-28 13:10:35.913057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.079 qpair failed and we were unable to recover it. 00:40:06.079 [2024-11-28 13:10:35.913395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.079 [2024-11-28 13:10:35.913424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.079 qpair failed and we were unable to recover it. 00:40:06.079 [2024-11-28 13:10:35.913784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.079 [2024-11-28 13:10:35.913811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.079 qpair failed and we were unable to recover it. 00:40:06.079 [2024-11-28 13:10:35.914173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.079 [2024-11-28 13:10:35.914202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.079 qpair failed and we were unable to recover it. 00:40:06.079 [2024-11-28 13:10:35.914512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.079 [2024-11-28 13:10:35.914540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.079 qpair failed and we were unable to recover it. 
00:40:06.079 [2024-11-28 13:10:35.914883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.079 [2024-11-28 13:10:35.914911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.079 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats continuously from 13:10:35.914883 through 13:10:35.956953 ...]
00:40:06.083 [2024-11-28 13:10:35.957311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.083 [2024-11-28 13:10:35.957341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.083 qpair failed and we were unable to recover it. 00:40:06.083 [2024-11-28 13:10:35.957713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.083 [2024-11-28 13:10:35.957741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.083 qpair failed and we were unable to recover it. 00:40:06.083 [2024-11-28 13:10:35.958088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.083 [2024-11-28 13:10:35.958116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.083 qpair failed and we were unable to recover it. 00:40:06.083 [2024-11-28 13:10:35.958447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.083 [2024-11-28 13:10:35.958476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.083 qpair failed and we were unable to recover it. 00:40:06.083 [2024-11-28 13:10:35.958828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.083 [2024-11-28 13:10:35.958856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.083 qpair failed and we were unable to recover it. 
00:40:06.083 [2024-11-28 13:10:35.959198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.083 [2024-11-28 13:10:35.959227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.083 qpair failed and we were unable to recover it. 00:40:06.083 [2024-11-28 13:10:35.959537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.083 [2024-11-28 13:10:35.959565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.083 qpair failed and we were unable to recover it. 00:40:06.083 [2024-11-28 13:10:35.959815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.083 [2024-11-28 13:10:35.959847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.083 qpair failed and we were unable to recover it. 00:40:06.083 [2024-11-28 13:10:35.960099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.083 [2024-11-28 13:10:35.960127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.083 qpair failed and we were unable to recover it. 00:40:06.083 [2024-11-28 13:10:35.960501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.083 [2024-11-28 13:10:35.960530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.083 qpair failed and we were unable to recover it. 
00:40:06.083 [2024-11-28 13:10:35.960753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.083 [2024-11-28 13:10:35.960782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.083 qpair failed and we were unable to recover it. 00:40:06.083 [2024-11-28 13:10:35.961110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.083 [2024-11-28 13:10:35.961138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.083 qpair failed and we were unable to recover it. 00:40:06.083 [2024-11-28 13:10:35.961487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.083 [2024-11-28 13:10:35.961517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.083 qpair failed and we were unable to recover it. 00:40:06.083 [2024-11-28 13:10:35.961869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.083 [2024-11-28 13:10:35.961897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.083 qpair failed and we were unable to recover it. 00:40:06.083 [2024-11-28 13:10:35.962340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.083 [2024-11-28 13:10:35.962370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.083 qpair failed and we were unable to recover it. 
00:40:06.083 [2024-11-28 13:10:35.962718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.083 [2024-11-28 13:10:35.962745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.083 qpair failed and we were unable to recover it. 00:40:06.083 [2024-11-28 13:10:35.963096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.083 [2024-11-28 13:10:35.963124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.083 qpair failed and we were unable to recover it. 00:40:06.083 [2024-11-28 13:10:35.963466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.083 [2024-11-28 13:10:35.963496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.083 qpair failed and we were unable to recover it. 00:40:06.083 [2024-11-28 13:10:35.963850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.083 [2024-11-28 13:10:35.963878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.964125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.964152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 
00:40:06.084 [2024-11-28 13:10:35.964387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.964416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.964763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.964791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.965136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.965172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.965549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.965577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.965924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.965958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 
00:40:06.084 [2024-11-28 13:10:35.966281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.966311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.966671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.966699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.967027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.967054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.967377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.967405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.967717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.967745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 
00:40:06.084 [2024-11-28 13:10:35.968072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.968100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.968433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.968462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.968808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.968836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.969175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.969204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.969454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.969485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 
00:40:06.084 [2024-11-28 13:10:35.969736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.969764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.970090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.970117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.970493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.970522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.970841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.970870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.971210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.971240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 
00:40:06.084 [2024-11-28 13:10:35.971471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.971498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.971845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.971873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.972107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.972135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.972477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.972507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.972843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.972871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 
00:40:06.084 [2024-11-28 13:10:35.973196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.973225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.973554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.973582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.973921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.084 [2024-11-28 13:10:35.973948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.084 qpair failed and we were unable to recover it. 00:40:06.084 [2024-11-28 13:10:35.974193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.974222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 00:40:06.085 [2024-11-28 13:10:35.974551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.974579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 
00:40:06.085 [2024-11-28 13:10:35.974902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.974930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 00:40:06.085 [2024-11-28 13:10:35.975274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.975305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 00:40:06.085 [2024-11-28 13:10:35.975662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.975690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 00:40:06.085 [2024-11-28 13:10:35.975925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.975952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 00:40:06.085 [2024-11-28 13:10:35.976294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.976323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 
00:40:06.085 [2024-11-28 13:10:35.976664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.976692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 00:40:06.085 [2024-11-28 13:10:35.977044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.977072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 00:40:06.085 [2024-11-28 13:10:35.977430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.977459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 00:40:06.085 [2024-11-28 13:10:35.977807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.977836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 00:40:06.085 [2024-11-28 13:10:35.978087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.978114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 
00:40:06.085 [2024-11-28 13:10:35.978468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.978497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 00:40:06.085 [2024-11-28 13:10:35.978841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.978869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 00:40:06.085 [2024-11-28 13:10:35.979217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.979246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 00:40:06.085 [2024-11-28 13:10:35.979612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.979640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 00:40:06.085 [2024-11-28 13:10:35.980047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.980081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 
00:40:06.085 [2024-11-28 13:10:35.980410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.980439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 00:40:06.085 [2024-11-28 13:10:35.980683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.980714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 00:40:06.085 [2024-11-28 13:10:35.981070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.981098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 00:40:06.085 [2024-11-28 13:10:35.981465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.981494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 00:40:06.085 [2024-11-28 13:10:35.981842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.981870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 
00:40:06.085 [2024-11-28 13:10:35.982100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.982128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 00:40:06.085 [2024-11-28 13:10:35.982404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.982433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 00:40:06.085 [2024-11-28 13:10:35.982788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.982816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 00:40:06.085 [2024-11-28 13:10:35.983198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.983228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 00:40:06.085 [2024-11-28 13:10:35.983602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.085 [2024-11-28 13:10:35.983630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.085 qpair failed and we were unable to recover it. 
00:40:06.085 [2024-11-28 13:10:35.983988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.085 [2024-11-28 13:10:35.984016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.085 qpair failed and we were unable to recover it.
00:40:06.085 [2024-11-28 13:10:35.984276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.085 [2024-11-28 13:10:35.984305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.085 qpair failed and we were unable to recover it.
00:40:06.085 [2024-11-28 13:10:35.984642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.085 [2024-11-28 13:10:35.984670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.085 qpair failed and we were unable to recover it.
00:40:06.085 [2024-11-28 13:10:35.984932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.085 [2024-11-28 13:10:35.984961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.085 qpair failed and we were unable to recover it.
00:40:06.085 [2024-11-28 13:10:35.985173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.085 [2024-11-28 13:10:35.985219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.085 qpair failed and we were unable to recover it.
00:40:06.085 [2024-11-28 13:10:35.985572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.985600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.986037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.986065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.986400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.986429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.986781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.986809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.987147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.987185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.987516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.987544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.987813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.987840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.988183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.988212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.988569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.988597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.988947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.988977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.989228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.989257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.989492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.989523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.989870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.989898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.990249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.990277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.990519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.990547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.990935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.990962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.991207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.991237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.991629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.991657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.991899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.991927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.992322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.992351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.992653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.992682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.993031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.993059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.993289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.993321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.993675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.993703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.994021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.994055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.994391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.994420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.994756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.994784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.995134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.995170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.995511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.995538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.995899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.995927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.996291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.086 [2024-11-28 13:10:35.996320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.086 qpair failed and we were unable to recover it.
00:40:06.086 [2024-11-28 13:10:35.996747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:35.996774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:35.997096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:35.997123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:35.997468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:35.997498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:35.997817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:35.997844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:35.998053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:35.998084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:35.998427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:35.998456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:35.998765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:35.998793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:35.999142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:35.999190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:35.999529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:35.999557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:35.999907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:35.999934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:36.000273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:36.000303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:36.000652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:36.000680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:36.000975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:36.001002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:36.001342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:36.001370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:36.001760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:36.001788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:36.002141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:36.002177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:36.002521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:36.002549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:36.002894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:36.002922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:36.003275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:36.003304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:36.003660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:36.003687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:36.004035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:36.004063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:36.004452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:36.004482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:36.004830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:36.004858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:36.005204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:36.005232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.087 qpair failed and we were unable to recover it.
00:40:06.087 [2024-11-28 13:10:36.005677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.087 [2024-11-28 13:10:36.005705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.006115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.006142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.006502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.006531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.006883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.006911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.007257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.007287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.007608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.007636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.007968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.007996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.008327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.008356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.008689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.008717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.009079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.009112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.009479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.009508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.009850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.009877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.010218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.010246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.010567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.010595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.010937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.010964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.011294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.011323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.011682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.011709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.012050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.012078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.012429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.012459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.012717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.012745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.013068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.013097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.013431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.013460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.013782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.013810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.014170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.014199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.014541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.014569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.014917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.014944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.015292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.015320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.015651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.015678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.016021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.016049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.088 [2024-11-28 13:10:36.016395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.088 [2024-11-28 13:10:36.016425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.088 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.016753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.016781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.017153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.017198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.017552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.017581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.017842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.017870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.018195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.018224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.018572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.018599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.018949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.018978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.019333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.019362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.019709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.019737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.020084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.020112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.020466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.020495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.020847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.020874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.021222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.021252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.021597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.021625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.021974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.022002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.022367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.022397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.022735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.022762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.023173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.023203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.023546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.023574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.023924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.023957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.024300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.024328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.024671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.024699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.024930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.024962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.025308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.025338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.025695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.089 [2024-11-28 13:10:36.025723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.089 qpair failed and we were unable to recover it.
00:40:06.089 [2024-11-28 13:10:36.026068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.089 [2024-11-28 13:10:36.026096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.089 qpair failed and we were unable to recover it. 00:40:06.089 [2024-11-28 13:10:36.026440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.089 [2024-11-28 13:10:36.026468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.089 qpair failed and we were unable to recover it. 00:40:06.089 [2024-11-28 13:10:36.026821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.089 [2024-11-28 13:10:36.026849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.089 qpair failed and we were unable to recover it. 00:40:06.089 [2024-11-28 13:10:36.027192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.089 [2024-11-28 13:10:36.027221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.089 qpair failed and we were unable to recover it. 00:40:06.089 [2024-11-28 13:10:36.027587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.089 [2024-11-28 13:10:36.027615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 
00:40:06.090 [2024-11-28 13:10:36.027966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.027994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.028351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.028381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.028770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.028798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.029134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.029187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.029515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.029543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 
00:40:06.090 [2024-11-28 13:10:36.029887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.029914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.030259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.030290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.030683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.030710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.031047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.031074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.031424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.031453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 
00:40:06.090 [2024-11-28 13:10:36.031790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.031818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.032175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.032204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.032549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.032577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.032934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.032961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.033307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.033335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 
00:40:06.090 [2024-11-28 13:10:36.033649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.033677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.033970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.033999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.034361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.034392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.034776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.034804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.035135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.035179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 
00:40:06.090 [2024-11-28 13:10:36.035511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.035539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.035803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.035830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.036168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.036197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.036530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.036557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.036897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.036925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 
00:40:06.090 [2024-11-28 13:10:36.037176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.037208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.037587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.037615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.037958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.037985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.038322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.090 [2024-11-28 13:10:36.038351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.090 qpair failed and we were unable to recover it. 00:40:06.090 [2024-11-28 13:10:36.038694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.038728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 
00:40:06.091 [2024-11-28 13:10:36.039075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.039103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 00:40:06.091 [2024-11-28 13:10:36.039450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.039479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 00:40:06.091 [2024-11-28 13:10:36.039810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.039837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 00:40:06.091 [2024-11-28 13:10:36.040183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.040211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 00:40:06.091 [2024-11-28 13:10:36.040532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.040560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 
00:40:06.091 [2024-11-28 13:10:36.040917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.040945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 00:40:06.091 [2024-11-28 13:10:36.041284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.041312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 00:40:06.091 [2024-11-28 13:10:36.041581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.041609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 00:40:06.091 [2024-11-28 13:10:36.041936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.041964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 00:40:06.091 [2024-11-28 13:10:36.042302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.042331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 
00:40:06.091 [2024-11-28 13:10:36.042757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.042785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 00:40:06.091 [2024-11-28 13:10:36.043133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.043168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 00:40:06.091 [2024-11-28 13:10:36.043514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.043542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 00:40:06.091 [2024-11-28 13:10:36.043826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.043855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 00:40:06.091 [2024-11-28 13:10:36.044195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.044224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 
00:40:06.091 [2024-11-28 13:10:36.044539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.044567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 00:40:06.091 [2024-11-28 13:10:36.044925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.044953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 00:40:06.091 [2024-11-28 13:10:36.045193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.045225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 00:40:06.091 [2024-11-28 13:10:36.045590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.045618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 00:40:06.091 [2024-11-28 13:10:36.045953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.045980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 
00:40:06.091 [2024-11-28 13:10:36.046325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.046354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 00:40:06.091 [2024-11-28 13:10:36.046595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.046623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 00:40:06.091 [2024-11-28 13:10:36.046949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.046978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 00:40:06.091 [2024-11-28 13:10:36.047308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.047338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 00:40:06.091 [2024-11-28 13:10:36.047697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.047724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 
00:40:06.091 [2024-11-28 13:10:36.048069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.048097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 00:40:06.091 [2024-11-28 13:10:36.048454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.048484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 00:40:06.091 [2024-11-28 13:10:36.048833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.091 [2024-11-28 13:10:36.048860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.091 qpair failed and we were unable to recover it. 00:40:06.092 [2024-11-28 13:10:36.049156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.092 [2024-11-28 13:10:36.049192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.092 qpair failed and we were unable to recover it. 00:40:06.092 [2024-11-28 13:10:36.049552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.092 [2024-11-28 13:10:36.049580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.092 qpair failed and we were unable to recover it. 
00:40:06.092 [2024-11-28 13:10:36.049923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.092 [2024-11-28 13:10:36.049952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.092 qpair failed and we were unable to recover it. 00:40:06.092 [2024-11-28 13:10:36.050284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.092 [2024-11-28 13:10:36.050313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.092 qpair failed and we were unable to recover it. 00:40:06.092 [2024-11-28 13:10:36.050455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.092 [2024-11-28 13:10:36.050485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.092 qpair failed and we were unable to recover it. 00:40:06.092 [2024-11-28 13:10:36.050827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.092 [2024-11-28 13:10:36.050856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.092 qpair failed and we were unable to recover it. 00:40:06.092 [2024-11-28 13:10:36.051204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.092 [2024-11-28 13:10:36.051233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.092 qpair failed and we were unable to recover it. 
00:40:06.092 [2024-11-28 13:10:36.051611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.092 [2024-11-28 13:10:36.051639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.092 qpair failed and we were unable to recover it. 00:40:06.092 [2024-11-28 13:10:36.051977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.092 [2024-11-28 13:10:36.052005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.092 qpair failed and we were unable to recover it. 00:40:06.092 [2024-11-28 13:10:36.052347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.092 [2024-11-28 13:10:36.052376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.092 qpair failed and we were unable to recover it. 00:40:06.092 [2024-11-28 13:10:36.052733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.092 [2024-11-28 13:10:36.052760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.092 qpair failed and we were unable to recover it. 00:40:06.092 [2024-11-28 13:10:36.052999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.092 [2024-11-28 13:10:36.053032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.092 qpair failed and we were unable to recover it. 
00:40:06.092 [2024-11-28 13:10:36.053341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.092 [2024-11-28 13:10:36.053370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.092 qpair failed and we were unable to recover it.
00:40:06.092 [2024-11-28 13:10:36.053727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.092 [2024-11-28 13:10:36.053756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.092 qpair failed and we were unable to recover it.
00:40:06.092 [2024-11-28 13:10:36.054176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.092 [2024-11-28 13:10:36.054205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.092 qpair failed and we were unable to recover it.
00:40:06.092 [2024-11-28 13:10:36.054551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.092 [2024-11-28 13:10:36.054579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.092 qpair failed and we were unable to recover it.
00:40:06.092 [2024-11-28 13:10:36.054934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.092 [2024-11-28 13:10:36.054962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.092 qpair failed and we were unable to recover it.
00:40:06.092 [2024-11-28 13:10:36.055309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.092 [2024-11-28 13:10:36.055338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.092 qpair failed and we were unable to recover it.
00:40:06.092 [2024-11-28 13:10:36.055683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.092 [2024-11-28 13:10:36.055710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.092 qpair failed and we were unable to recover it.
00:40:06.092 [2024-11-28 13:10:36.056056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.092 [2024-11-28 13:10:36.056084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.092 qpair failed and we were unable to recover it.
00:40:06.092 [2024-11-28 13:10:36.056441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.092 [2024-11-28 13:10:36.056470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.092 qpair failed and we were unable to recover it.
00:40:06.092 [2024-11-28 13:10:36.056821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.092 [2024-11-28 13:10:36.056849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.092 qpair failed and we were unable to recover it.
00:40:06.092 [2024-11-28 13:10:36.057206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.092 [2024-11-28 13:10:36.057235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.092 qpair failed and we were unable to recover it.
00:40:06.092 [2024-11-28 13:10:36.057614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.092 [2024-11-28 13:10:36.057642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.092 qpair failed and we were unable to recover it.
00:40:06.092 [2024-11-28 13:10:36.057984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.092 [2024-11-28 13:10:36.058011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.092 qpair failed and we were unable to recover it.
00:40:06.092 [2024-11-28 13:10:36.058375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.092 [2024-11-28 13:10:36.058405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.092 qpair failed and we were unable to recover it.
00:40:06.092 [2024-11-28 13:10:36.058752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.092 [2024-11-28 13:10:36.058780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.092 qpair failed and we were unable to recover it.
00:40:06.092 [2024-11-28 13:10:36.059101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.092 [2024-11-28 13:10:36.059128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.092 qpair failed and we were unable to recover it.
00:40:06.092 [2024-11-28 13:10:36.059478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.092 [2024-11-28 13:10:36.059508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.092 qpair failed and we were unable to recover it.
00:40:06.092 [2024-11-28 13:10:36.059851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.059879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.060255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.060284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.060535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.060563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.060814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.060842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.061189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.061218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.061533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.061560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.061910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.061938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.062289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.062318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.062662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.062689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.063030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.063064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.063293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.063322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.063561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.063588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.063932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.063960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.064320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.064349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.064706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.064734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.065080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.065109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.065447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.065476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.065859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.065886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.066118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.066145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.066397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.066428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.066788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.066816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.067154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.067191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.067463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.067491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.067856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.067884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.068245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.068275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.093 [2024-11-28 13:10:36.068637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.093 [2024-11-28 13:10:36.068664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.093 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.069009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.069037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.069328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.069356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.069699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.069728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.070086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.070114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.070463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.070492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.070825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.070852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.071198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.071226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.071590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.071618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.071961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.071990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.072327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.072356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.072711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.072739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.073080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.073108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.073464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.073493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.073838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.073866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.074125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.074152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.074507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.074535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.074891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.074919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.075260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.075289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.075629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.075656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.076006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.076034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.076396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.076425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.076759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.076787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.077148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.077183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.077521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.077554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.077895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.077923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.078179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.078208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.078530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.078557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.078900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.078928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.079277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.079306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.094 [2024-11-28 13:10:36.079661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.094 [2024-11-28 13:10:36.079689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.094 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.079991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.080018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.080369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.080398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.080733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.080761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.081118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.081146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.081469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.081497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.081841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.081868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.082217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.082245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.082598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.082626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.082976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.083004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.083346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.083375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.083726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.083753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.084093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.084121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.084458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.084487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.084832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.084859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.085203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.085232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.085591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.085619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.085878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.085905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.086249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.086277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.086612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.086641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.086796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.086827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.087208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.087238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.087586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.087614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.087966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.087993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.088413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.088442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.088759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.088788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.089127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.089155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.089510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.089539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.089883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.089910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.090238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.090267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.090603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.095 [2024-11-28 13:10:36.090630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.095 qpair failed and we were unable to recover it.
00:40:06.095 [2024-11-28 13:10:36.090982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.096 [2024-11-28 13:10:36.091009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.096 qpair failed and we were unable to recover it.
00:40:06.096 [2024-11-28 13:10:36.091326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.096 [2024-11-28 13:10:36.091355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.096 qpair failed and we were unable to recover it.
00:40:06.096 [2024-11-28 13:10:36.091717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.096 [2024-11-28 13:10:36.091745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.096 qpair failed and we were unable to recover it.
00:40:06.096 [2024-11-28 13:10:36.091984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.096 [2024-11-28 13:10:36.092022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.096 qpair failed and we were unable to recover it.
00:40:06.096 [2024-11-28 13:10:36.092247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.096 [2024-11-28 13:10:36.092280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.096 qpair failed and we were unable to recover it.
00:40:06.096 [2024-11-28 13:10:36.092623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.096 [2024-11-28 13:10:36.092651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.096 qpair failed and we were unable to recover it.
00:40:06.096 [2024-11-28 13:10:36.093007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.096 [2024-11-28 13:10:36.093034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.096 qpair failed and we were unable to recover it.
00:40:06.096 [2024-11-28 13:10:36.093372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.096 [2024-11-28 13:10:36.093401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.096 qpair failed and we were unable to recover it.
00:40:06.096 [2024-11-28 13:10:36.093742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.096 [2024-11-28 13:10:36.093771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.096 qpair failed and we were unable to recover it.
00:40:06.096 [2024-11-28 13:10:36.094123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.096 [2024-11-28 13:10:36.094150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.096 qpair failed and we were unable to recover it.
00:40:06.096 [2024-11-28 13:10:36.094543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.096 [2024-11-28 13:10:36.094571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.096 qpair failed and we were unable to recover it.
00:40:06.096 [2024-11-28 13:10:36.094917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.096 [2024-11-28 13:10:36.094944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.096 qpair failed and we were unable to recover it.
00:40:06.096 [2024-11-28 13:10:36.095189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.096 [2024-11-28 13:10:36.095219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.096 qpair failed and we were unable to recover it. 00:40:06.096 [2024-11-28 13:10:36.095562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.096 [2024-11-28 13:10:36.095592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.096 qpair failed and we were unable to recover it. 00:40:06.096 [2024-11-28 13:10:36.095849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.096 [2024-11-28 13:10:36.095877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.096 qpair failed and we were unable to recover it. 00:40:06.096 [2024-11-28 13:10:36.096250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.096 [2024-11-28 13:10:36.096279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.096 qpair failed and we were unable to recover it. 00:40:06.096 [2024-11-28 13:10:36.096624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.096 [2024-11-28 13:10:36.096652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.096 qpair failed and we were unable to recover it. 
00:40:06.096 [2024-11-28 13:10:36.097018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.096 [2024-11-28 13:10:36.097046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.096 qpair failed and we were unable to recover it. 00:40:06.096 [2024-11-28 13:10:36.097379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.096 [2024-11-28 13:10:36.097408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.096 qpair failed and we were unable to recover it. 00:40:06.096 [2024-11-28 13:10:36.097739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.096 [2024-11-28 13:10:36.097767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.096 qpair failed and we were unable to recover it. 00:40:06.096 [2024-11-28 13:10:36.098128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.096 [2024-11-28 13:10:36.098157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.096 qpair failed and we were unable to recover it. 00:40:06.096 [2024-11-28 13:10:36.098552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.096 [2024-11-28 13:10:36.098580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.096 qpair failed and we were unable to recover it. 
00:40:06.096 [2024-11-28 13:10:36.098931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.096 [2024-11-28 13:10:36.098959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.096 qpair failed and we were unable to recover it. 00:40:06.096 [2024-11-28 13:10:36.099311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.096 [2024-11-28 13:10:36.099340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.096 qpair failed and we were unable to recover it. 00:40:06.096 [2024-11-28 13:10:36.099680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.096 [2024-11-28 13:10:36.099708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.096 qpair failed and we were unable to recover it. 00:40:06.096 [2024-11-28 13:10:36.100058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.096 [2024-11-28 13:10:36.100086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.096 qpair failed and we were unable to recover it. 00:40:06.096 [2024-11-28 13:10:36.100444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.096 [2024-11-28 13:10:36.100472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.096 qpair failed and we were unable to recover it. 
00:40:06.096 [2024-11-28 13:10:36.100828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.096 [2024-11-28 13:10:36.100856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.096 qpair failed and we were unable to recover it. 00:40:06.096 [2024-11-28 13:10:36.101286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.096 [2024-11-28 13:10:36.101314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.101650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.101677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.102025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.102054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.102420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.102449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 
00:40:06.097 [2024-11-28 13:10:36.102803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.102830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.103195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.103226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.103487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.103515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.103854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.103882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.104216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.104245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 
00:40:06.097 [2024-11-28 13:10:36.104504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.104533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.104835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.104863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.105208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.105236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.105611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.105639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.105982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.106011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 
00:40:06.097 [2024-11-28 13:10:36.106431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.106460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.106864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.106898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.107211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.107241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.107600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.107628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.107869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.107896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 
00:40:06.097 [2024-11-28 13:10:36.108231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.108259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.108622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.108649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.108998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.109026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.109269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.109301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.109619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.109647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 
00:40:06.097 [2024-11-28 13:10:36.109994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.110021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.110367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.110397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.110616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.110647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.110971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.110999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.111353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.111384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 
00:40:06.097 [2024-11-28 13:10:36.111703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.097 [2024-11-28 13:10:36.111732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.097 qpair failed and we were unable to recover it. 00:40:06.097 [2024-11-28 13:10:36.112079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.112107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 00:40:06.098 [2024-11-28 13:10:36.112433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.112462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 00:40:06.098 [2024-11-28 13:10:36.112777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.112805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 00:40:06.098 [2024-11-28 13:10:36.113142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.113178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 
00:40:06.098 [2024-11-28 13:10:36.113491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.113519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 00:40:06.098 [2024-11-28 13:10:36.113849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.113877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 00:40:06.098 [2024-11-28 13:10:36.114235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.114264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 00:40:06.098 [2024-11-28 13:10:36.114626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.114653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 00:40:06.098 [2024-11-28 13:10:36.115001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.115028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 
00:40:06.098 [2024-11-28 13:10:36.115370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.115400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 00:40:06.098 [2024-11-28 13:10:36.115758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.115785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 00:40:06.098 [2024-11-28 13:10:36.116132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.116167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 00:40:06.098 [2024-11-28 13:10:36.116502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.116530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 00:40:06.098 [2024-11-28 13:10:36.116873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.116901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 
00:40:06.098 [2024-11-28 13:10:36.117251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.117280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 00:40:06.098 [2024-11-28 13:10:36.117629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.117657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 00:40:06.098 [2024-11-28 13:10:36.118003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.118030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 00:40:06.098 [2024-11-28 13:10:36.118381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.118410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 00:40:06.098 [2024-11-28 13:10:36.118647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.118674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 
00:40:06.098 [2024-11-28 13:10:36.118910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.118938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 00:40:06.098 [2024-11-28 13:10:36.119273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.119303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 00:40:06.098 [2024-11-28 13:10:36.119643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.119670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 00:40:06.098 [2024-11-28 13:10:36.120015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.120042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 00:40:06.098 [2024-11-28 13:10:36.120299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.120328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 
00:40:06.098 [2024-11-28 13:10:36.120671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.120699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 00:40:06.098 [2024-11-28 13:10:36.121046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.121080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 00:40:06.098 [2024-11-28 13:10:36.121436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.121465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 00:40:06.098 [2024-11-28 13:10:36.121705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.098 [2024-11-28 13:10:36.121733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.098 qpair failed and we were unable to recover it. 00:40:06.098 [2024-11-28 13:10:36.122045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.099 [2024-11-28 13:10:36.122072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.099 qpair failed and we were unable to recover it. 
00:40:06.099 [2024-11-28 13:10:36.122425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.122454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.122806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.122833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.123252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.123280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.123591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.123618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.123963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.123990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.124355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.124384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.124707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.124734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.125083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.125111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.125448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.125477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.125830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.125857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.126112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.126140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.126456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.126486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.126744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.126775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.127097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.127125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.127495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.127524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.127782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.127809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.128132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.128169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.128443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.128470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.128814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.128842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.129192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.129222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.129588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.129616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.129944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.129971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.130318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.130346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.130698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.130727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.099 qpair failed and we were unable to recover it.
00:40:06.099 [2024-11-28 13:10:36.131069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.099 [2024-11-28 13:10:36.131098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.131441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.131471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.131793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.131821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.132188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.132218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.132599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.132627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.132972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.133000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.133360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.133390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.133722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.133750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.134091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.134119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.134456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.134486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.134727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.134755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.134998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.135025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.135374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.135410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.135763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.135791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.136131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.136166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.136501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.136529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.136869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.136897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.137248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.137277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.137619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.137647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.137989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.138017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.138376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.138404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.138675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.138703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.139118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.139146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.139471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.139499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.139844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.139872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.140218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.140247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.140502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.140531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.140853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.140881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.141229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.141258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.100 [2024-11-28 13:10:36.141609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.100 [2024-11-28 13:10:36.141637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.100 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.141991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.142019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.142361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.142390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.142659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.142687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.143020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.143048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.143387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.143415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.143752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.143780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.144124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.144152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.144447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.144475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.144719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.144746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.145003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.145034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.145383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.145413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.145755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.145783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.146126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.146154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.146499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.146527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.146769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.146796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.147113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.147141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.147509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.147539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.147868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.147895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.148235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.148264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.148445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.148477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.148815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.148842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.149195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.149225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.149545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.149580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.149833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.149861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.150205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.150234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.150578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.150605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.150960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.150988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.151354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.151383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.151634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.101 [2024-11-28 13:10:36.151661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.101 qpair failed and we were unable to recover it.
00:40:06.101 [2024-11-28 13:10:36.151996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.152023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.152346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.152374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.152725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.152753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.153098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.153127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.153488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.153517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.153862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.153890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.154239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.154269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.154617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.154647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.154985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.155013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.155365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.155394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.155746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.155773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.156119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.156147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.156492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.156520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.156859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.156887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.157236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.157265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.157636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.157663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.157995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.158023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.158356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.158385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.158737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.158765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.159024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.159052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.159372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.159402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.159727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.159754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.159996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.160027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.160367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.160397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.160750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.160778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.161132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.161168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.161556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.161584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.161925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.161952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.162299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.162328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.102 qpair failed and we were unable to recover it.
00:40:06.102 [2024-11-28 13:10:36.162672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.102 [2024-11-28 13:10:36.162700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.103 qpair failed and we were unable to recover it.
00:40:06.103 [2024-11-28 13:10:36.163047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.103 [2024-11-28 13:10:36.163074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.103 qpair failed and we were unable to recover it.
00:40:06.103 [2024-11-28 13:10:36.163440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.103 [2024-11-28 13:10:36.163469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.103 qpair failed and we were unable to recover it.
00:40:06.103 [2024-11-28 13:10:36.163836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.163865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 00:40:06.103 [2024-11-28 13:10:36.164225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.164266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 00:40:06.103 [2024-11-28 13:10:36.164609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.164638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 00:40:06.103 [2024-11-28 13:10:36.164989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.165018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 00:40:06.103 [2024-11-28 13:10:36.165348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.165377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 
00:40:06.103 [2024-11-28 13:10:36.165730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.165758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 00:40:06.103 [2024-11-28 13:10:36.166103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.166131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 00:40:06.103 [2024-11-28 13:10:36.166477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.166505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 00:40:06.103 [2024-11-28 13:10:36.166897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.166925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 00:40:06.103 [2024-11-28 13:10:36.167264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.167293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 
00:40:06.103 [2024-11-28 13:10:36.167627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.167656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 00:40:06.103 [2024-11-28 13:10:36.167997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.168025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 00:40:06.103 [2024-11-28 13:10:36.168370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.168399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 00:40:06.103 [2024-11-28 13:10:36.168742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.168771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 00:40:06.103 [2024-11-28 13:10:36.169121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.169149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 
00:40:06.103 [2024-11-28 13:10:36.169560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.169589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 00:40:06.103 [2024-11-28 13:10:36.169927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.169955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 00:40:06.103 [2024-11-28 13:10:36.170308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.170337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 00:40:06.103 [2024-11-28 13:10:36.170681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.170708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 00:40:06.103 [2024-11-28 13:10:36.171057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.171085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 
00:40:06.103 [2024-11-28 13:10:36.171455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.171484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 00:40:06.103 [2024-11-28 13:10:36.171827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.171856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 00:40:06.103 [2024-11-28 13:10:36.172125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.172153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 00:40:06.103 [2024-11-28 13:10:36.172512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.172540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 00:40:06.103 [2024-11-28 13:10:36.172929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.172957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 
00:40:06.103 [2024-11-28 13:10:36.173277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.173306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 00:40:06.103 [2024-11-28 13:10:36.173553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.173585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 00:40:06.103 [2024-11-28 13:10:36.173936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.103 [2024-11-28 13:10:36.173964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.103 qpair failed and we were unable to recover it. 00:40:06.103 [2024-11-28 13:10:36.174304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.104 [2024-11-28 13:10:36.174333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.104 qpair failed and we were unable to recover it. 00:40:06.104 [2024-11-28 13:10:36.174684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.104 [2024-11-28 13:10:36.174712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.104 qpair failed and we were unable to recover it. 
00:40:06.104 [2024-11-28 13:10:36.175106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.104 [2024-11-28 13:10:36.175133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.104 qpair failed and we were unable to recover it. 00:40:06.104 [2024-11-28 13:10:36.175455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.104 [2024-11-28 13:10:36.175485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.104 qpair failed and we were unable to recover it. 00:40:06.104 [2024-11-28 13:10:36.175840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.104 [2024-11-28 13:10:36.175868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.104 qpair failed and we were unable to recover it. 00:40:06.104 [2024-11-28 13:10:36.176214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.104 [2024-11-28 13:10:36.176244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.104 qpair failed and we were unable to recover it. 00:40:06.104 [2024-11-28 13:10:36.176586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.104 [2024-11-28 13:10:36.176613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.104 qpair failed and we were unable to recover it. 
00:40:06.104 [2024-11-28 13:10:36.176927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.104 [2024-11-28 13:10:36.176956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.104 qpair failed and we were unable to recover it. 00:40:06.104 [2024-11-28 13:10:36.177189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.104 [2024-11-28 13:10:36.177218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.104 qpair failed and we were unable to recover it. 00:40:06.104 [2024-11-28 13:10:36.177576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.104 [2024-11-28 13:10:36.177604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.104 qpair failed and we were unable to recover it. 00:40:06.104 [2024-11-28 13:10:36.177956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.104 [2024-11-28 13:10:36.177984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.104 qpair failed and we were unable to recover it. 00:40:06.104 [2024-11-28 13:10:36.178327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.104 [2024-11-28 13:10:36.178356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.104 qpair failed and we were unable to recover it. 
00:40:06.104 [2024-11-28 13:10:36.178711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.104 [2024-11-28 13:10:36.178739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.104 qpair failed and we were unable to recover it. 00:40:06.104 [2024-11-28 13:10:36.179055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.104 [2024-11-28 13:10:36.179088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.104 qpair failed and we were unable to recover it. 00:40:06.104 [2024-11-28 13:10:36.179342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.104 [2024-11-28 13:10:36.179371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.104 qpair failed and we were unable to recover it. 00:40:06.104 [2024-11-28 13:10:36.179713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.104 [2024-11-28 13:10:36.179741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.104 qpair failed and we were unable to recover it. 00:40:06.104 [2024-11-28 13:10:36.180082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.104 [2024-11-28 13:10:36.180110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.104 qpair failed and we were unable to recover it. 
00:40:06.104 [2024-11-28 13:10:36.181848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.104 [2024-11-28 13:10:36.181896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.104 qpair failed and we were unable to recover it. 00:40:06.104 [2024-11-28 13:10:36.182254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.104 [2024-11-28 13:10:36.182286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.104 qpair failed and we were unable to recover it. 00:40:06.104 [2024-11-28 13:10:36.182558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.104 [2024-11-28 13:10:36.182589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 00:40:06.383 [2024-11-28 13:10:36.182930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.182959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 00:40:06.383 [2024-11-28 13:10:36.183203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.183232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 
00:40:06.383 [2024-11-28 13:10:36.183456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.183488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 00:40:06.383 [2024-11-28 13:10:36.183834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.183862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 00:40:06.383 [2024-11-28 13:10:36.184284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.184313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 00:40:06.383 [2024-11-28 13:10:36.184655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.184683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 00:40:06.383 [2024-11-28 13:10:36.185036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.185064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 
00:40:06.383 [2024-11-28 13:10:36.185403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.185432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 00:40:06.383 [2024-11-28 13:10:36.185759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.185787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 00:40:06.383 [2024-11-28 13:10:36.186141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.186180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 00:40:06.383 [2024-11-28 13:10:36.186501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.186529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 00:40:06.383 [2024-11-28 13:10:36.186886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.186914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 
00:40:06.383 [2024-11-28 13:10:36.187290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.187320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 00:40:06.383 [2024-11-28 13:10:36.187652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.187679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 00:40:06.383 [2024-11-28 13:10:36.188001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.188028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 00:40:06.383 [2024-11-28 13:10:36.188373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.188402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 00:40:06.383 [2024-11-28 13:10:36.188767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.188794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 
00:40:06.383 [2024-11-28 13:10:36.189046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.189078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 00:40:06.383 [2024-11-28 13:10:36.189395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.189424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 00:40:06.383 [2024-11-28 13:10:36.189775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.189803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 00:40:06.383 [2024-11-28 13:10:36.190151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.190188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 00:40:06.383 [2024-11-28 13:10:36.190433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.190460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 
00:40:06.383 [2024-11-28 13:10:36.190806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.190834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 00:40:06.383 [2024-11-28 13:10:36.191193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.191223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 00:40:06.383 [2024-11-28 13:10:36.191473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.191504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 00:40:06.383 [2024-11-28 13:10:36.191829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.191857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 00:40:06.383 [2024-11-28 13:10:36.192199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.383 [2024-11-28 13:10:36.192228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.383 qpair failed and we were unable to recover it. 
00:40:06.384 [2024-11-28 13:10:36.192653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.384 [2024-11-28 13:10:36.192681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.384 qpair failed and we were unable to recover it. 00:40:06.384 [2024-11-28 13:10:36.193016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.384 [2024-11-28 13:10:36.193043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.384 qpair failed and we were unable to recover it. 00:40:06.384 [2024-11-28 13:10:36.193429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.384 [2024-11-28 13:10:36.193459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.384 qpair failed and we were unable to recover it. 00:40:06.384 [2024-11-28 13:10:36.193800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.384 [2024-11-28 13:10:36.193828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.384 qpair failed and we were unable to recover it. 00:40:06.384 [2024-11-28 13:10:36.194181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.384 [2024-11-28 13:10:36.194211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.384 qpair failed and we were unable to recover it. 
00:40:06.384 [2024-11-28 13:10:36.194614 .. 13:10:36.233480] (the same failure sequence repeats approximately 110 more times: posix.c:1054:posix_sock_create reports connect() failed, errno = 111, nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420, and every retry ends with "qpair failed and we were unable to recover it.")
00:40:06.388 [2024-11-28 13:10:36.233721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.233748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 00:40:06.388 [2024-11-28 13:10:36.234079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.234107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 00:40:06.388 [2024-11-28 13:10:36.234460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.234490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 00:40:06.388 [2024-11-28 13:10:36.234830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.234858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 00:40:06.388 [2024-11-28 13:10:36.235216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.235246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 
00:40:06.388 [2024-11-28 13:10:36.235568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.235595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 00:40:06.388 [2024-11-28 13:10:36.235949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.235977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 00:40:06.388 [2024-11-28 13:10:36.236322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.236351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 00:40:06.388 [2024-11-28 13:10:36.236692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.236720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 00:40:06.388 [2024-11-28 13:10:36.237056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.237084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 
00:40:06.388 [2024-11-28 13:10:36.237445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.237474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 00:40:06.388 [2024-11-28 13:10:36.237825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.237859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 00:40:06.388 [2024-11-28 13:10:36.238210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.238240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 00:40:06.388 [2024-11-28 13:10:36.238541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.238569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 00:40:06.388 [2024-11-28 13:10:36.238913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.238943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 
00:40:06.388 [2024-11-28 13:10:36.239183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.239216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 00:40:06.388 [2024-11-28 13:10:36.239562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.239591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 00:40:06.388 [2024-11-28 13:10:36.239857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.239885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 00:40:06.388 [2024-11-28 13:10:36.240214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.240245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 00:40:06.388 [2024-11-28 13:10:36.240605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.240632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 
00:40:06.388 [2024-11-28 13:10:36.240979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.241006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 00:40:06.388 [2024-11-28 13:10:36.241355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.241384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 00:40:06.388 [2024-11-28 13:10:36.241741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.241768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 00:40:06.388 [2024-11-28 13:10:36.242110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.242137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 00:40:06.388 [2024-11-28 13:10:36.242493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.242523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 
00:40:06.388 [2024-11-28 13:10:36.242877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.242905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.388 qpair failed and we were unable to recover it. 00:40:06.388 [2024-11-28 13:10:36.243259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.388 [2024-11-28 13:10:36.243288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 00:40:06.389 [2024-11-28 13:10:36.243646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.389 [2024-11-28 13:10:36.243673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 00:40:06.389 [2024-11-28 13:10:36.244024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.389 [2024-11-28 13:10:36.244051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 00:40:06.389 [2024-11-28 13:10:36.244393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.389 [2024-11-28 13:10:36.244421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 
00:40:06.389 [2024-11-28 13:10:36.244767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.389 [2024-11-28 13:10:36.244795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 00:40:06.389 [2024-11-28 13:10:36.245147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.389 [2024-11-28 13:10:36.245185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 00:40:06.389 [2024-11-28 13:10:36.245537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.389 [2024-11-28 13:10:36.245564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 00:40:06.389 [2024-11-28 13:10:36.245905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.389 [2024-11-28 13:10:36.245933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 00:40:06.389 [2024-11-28 13:10:36.246277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.389 [2024-11-28 13:10:36.246307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 
00:40:06.389 [2024-11-28 13:10:36.246637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.389 [2024-11-28 13:10:36.246665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 00:40:06.389 [2024-11-28 13:10:36.247014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.389 [2024-11-28 13:10:36.247042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 00:40:06.389 [2024-11-28 13:10:36.247457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.389 [2024-11-28 13:10:36.247486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 00:40:06.389 [2024-11-28 13:10:36.247725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.389 [2024-11-28 13:10:36.247759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 00:40:06.389 [2024-11-28 13:10:36.248013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.389 [2024-11-28 13:10:36.248045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 
00:40:06.389 [2024-11-28 13:10:36.248377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.389 [2024-11-28 13:10:36.248406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 00:40:06.389 [2024-11-28 13:10:36.248660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.389 [2024-11-28 13:10:36.248688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 00:40:06.389 [2024-11-28 13:10:36.249053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.389 [2024-11-28 13:10:36.249082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 00:40:06.389 [2024-11-28 13:10:36.249448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.389 [2024-11-28 13:10:36.249478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 00:40:06.389 [2024-11-28 13:10:36.249817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.389 [2024-11-28 13:10:36.249844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 
00:40:06.389 [2024-11-28 13:10:36.250126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.389 [2024-11-28 13:10:36.250153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 00:40:06.389 [2024-11-28 13:10:36.250522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.389 [2024-11-28 13:10:36.250550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 00:40:06.389 [2024-11-28 13:10:36.250899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.389 [2024-11-28 13:10:36.250926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 00:40:06.389 [2024-11-28 13:10:36.251275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.389 [2024-11-28 13:10:36.251304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 00:40:06.389 [2024-11-28 13:10:36.251650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.389 [2024-11-28 13:10:36.251679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.389 qpair failed and we were unable to recover it. 
00:40:06.390 [2024-11-28 13:10:36.252021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.252049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 00:40:06.390 [2024-11-28 13:10:36.252406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.252435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 00:40:06.390 [2024-11-28 13:10:36.252798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.252826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 00:40:06.390 [2024-11-28 13:10:36.253179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.253209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 00:40:06.390 [2024-11-28 13:10:36.253553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.253581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 
00:40:06.390 [2024-11-28 13:10:36.253936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.253964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 00:40:06.390 [2024-11-28 13:10:36.254197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.254227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 00:40:06.390 [2024-11-28 13:10:36.254569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.254596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 00:40:06.390 [2024-11-28 13:10:36.254942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.254970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 00:40:06.390 [2024-11-28 13:10:36.255317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.255346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 
00:40:06.390 [2024-11-28 13:10:36.255636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.255663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 00:40:06.390 [2024-11-28 13:10:36.256008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.256036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 00:40:06.390 [2024-11-28 13:10:36.256383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.256413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 00:40:06.390 [2024-11-28 13:10:36.256759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.256787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 00:40:06.390 [2024-11-28 13:10:36.257133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.257175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 
00:40:06.390 [2024-11-28 13:10:36.257396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.257428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 00:40:06.390 [2024-11-28 13:10:36.257740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.257768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 00:40:06.390 [2024-11-28 13:10:36.258112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.258140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 00:40:06.390 [2024-11-28 13:10:36.258493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.258521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 00:40:06.390 [2024-11-28 13:10:36.258935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.258964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 
00:40:06.390 [2024-11-28 13:10:36.259298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.259326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 00:40:06.390 [2024-11-28 13:10:36.259737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.259764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 00:40:06.390 [2024-11-28 13:10:36.260106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.260133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 00:40:06.390 [2024-11-28 13:10:36.260497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.260527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 00:40:06.390 [2024-11-28 13:10:36.260934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.260962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 
00:40:06.390 [2024-11-28 13:10:36.261187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.261219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 00:40:06.390 [2024-11-28 13:10:36.261588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.261616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 00:40:06.390 [2024-11-28 13:10:36.261961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.390 [2024-11-28 13:10:36.261989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.390 qpair failed and we were unable to recover it. 00:40:06.391 [2024-11-28 13:10:36.262334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.391 [2024-11-28 13:10:36.262370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.391 qpair failed and we were unable to recover it. 00:40:06.391 [2024-11-28 13:10:36.262711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.391 [2024-11-28 13:10:36.262738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.391 qpair failed and we were unable to recover it. 
00:40:06.394 [2024-11-28 13:10:36.303130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.394 [2024-11-28 13:10:36.303164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.394 qpair failed and we were unable to recover it. 00:40:06.394 [2024-11-28 13:10:36.303499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.394 [2024-11-28 13:10:36.303527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.394 qpair failed and we were unable to recover it. 00:40:06.394 [2024-11-28 13:10:36.303873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.394 [2024-11-28 13:10:36.303901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.394 qpair failed and we were unable to recover it. 00:40:06.394 [2024-11-28 13:10:36.304248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.394 [2024-11-28 13:10:36.304277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.394 qpair failed and we were unable to recover it. 00:40:06.394 [2024-11-28 13:10:36.304644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.394 [2024-11-28 13:10:36.304673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.394 qpair failed and we were unable to recover it. 
00:40:06.394 [2024-11-28 13:10:36.305018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.305047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.305390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.305420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.305768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.305796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.306151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.306194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.306512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.306539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 
00:40:06.395 [2024-11-28 13:10:36.306879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.306907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.307256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.307285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.307536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.307563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.307903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.307932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.308282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.308311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 
00:40:06.395 [2024-11-28 13:10:36.308646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.308673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.309032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.309060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.309413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.309442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.309787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.309814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.310246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.310275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 
00:40:06.395 [2024-11-28 13:10:36.310619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.310647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.310995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.311023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.311387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.311417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.311836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.311865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.312200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.312229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 
00:40:06.395 [2024-11-28 13:10:36.312607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.312634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.312988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.313017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.313362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.313391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.313745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.313773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.314118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.314146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 
00:40:06.395 [2024-11-28 13:10:36.314501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.314530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.314882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.314910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.315343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.315374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.315713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.315741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 00:40:06.395 [2024-11-28 13:10:36.316087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.395 [2024-11-28 13:10:36.316114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.395 qpair failed and we were unable to recover it. 
00:40:06.395 [2024-11-28 13:10:36.316359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.316392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.316805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.316832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.317178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.317208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.317552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.317580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.317937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.317965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 
00:40:06.396 [2024-11-28 13:10:36.318318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.318347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.318690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.318718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.319065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.319092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.319437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.319467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.319802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.319830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 
00:40:06.396 [2024-11-28 13:10:36.320173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.320203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.320589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.320617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.320941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.320969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.321319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.321354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.321699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.321727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 
00:40:06.396 [2024-11-28 13:10:36.322068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.322096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.322439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.322469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.322764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.322792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.323107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.323135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.323475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.323504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 
00:40:06.396 [2024-11-28 13:10:36.323741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.323770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.324099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.324127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.324532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.324561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.324813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.324840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.325198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.325229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 
00:40:06.396 [2024-11-28 13:10:36.325557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.325585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.325940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.325967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.326316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.326346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.326702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.326730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.396 qpair failed and we were unable to recover it. 00:40:06.396 [2024-11-28 13:10:36.327085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.396 [2024-11-28 13:10:36.327112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.397 qpair failed and we were unable to recover it. 
00:40:06.397 [2024-11-28 13:10:36.327439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.397 [2024-11-28 13:10:36.327468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.397 qpair failed and we were unable to recover it. 00:40:06.397 [2024-11-28 13:10:36.327805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.397 [2024-11-28 13:10:36.327833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.397 qpair failed and we were unable to recover it. 00:40:06.397 [2024-11-28 13:10:36.328078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.397 [2024-11-28 13:10:36.328109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.397 qpair failed and we were unable to recover it. 00:40:06.397 [2024-11-28 13:10:36.328478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.397 [2024-11-28 13:10:36.328508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.397 qpair failed and we were unable to recover it. 00:40:06.397 [2024-11-28 13:10:36.328857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.397 [2024-11-28 13:10:36.328885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.397 qpair failed and we were unable to recover it. 
00:40:06.397 [2024-11-28 13:10:36.329234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.397 [2024-11-28 13:10:36.329263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.397 qpair failed and we were unable to recover it. 00:40:06.397 [2024-11-28 13:10:36.329633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.397 [2024-11-28 13:10:36.329661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.397 qpair failed and we were unable to recover it. 00:40:06.397 [2024-11-28 13:10:36.329908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.397 [2024-11-28 13:10:36.329936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.397 qpair failed and we were unable to recover it. 00:40:06.397 [2024-11-28 13:10:36.330275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.397 [2024-11-28 13:10:36.330305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.397 qpair failed and we were unable to recover it. 00:40:06.397 [2024-11-28 13:10:36.330649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.397 [2024-11-28 13:10:36.330677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.397 qpair failed and we were unable to recover it. 
00:40:06.397 [2024-11-28 13:10:36.331042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.397 [2024-11-28 13:10:36.331070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.397 qpair failed and we were unable to recover it.
00:40:06.397 [... the identical failure sequence — connect() failed with errno = 111 (ECONNREFUSED), followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x7fa600000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." — repeats continuously (~115 occurrences) through [2024-11-28 13:10:36.373577] ...]
00:40:06.401 [2024-11-28 13:10:36.373925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.401 [2024-11-28 13:10:36.373952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.401 qpair failed and we were unable to recover it. 00:40:06.401 [2024-11-28 13:10:36.374207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.401 [2024-11-28 13:10:36.374237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.401 qpair failed and we were unable to recover it. 00:40:06.401 [2024-11-28 13:10:36.374593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.401 [2024-11-28 13:10:36.374621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.401 qpair failed and we were unable to recover it. 00:40:06.401 [2024-11-28 13:10:36.374977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.401 [2024-11-28 13:10:36.375005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.401 qpair failed and we were unable to recover it. 00:40:06.401 [2024-11-28 13:10:36.375339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.401 [2024-11-28 13:10:36.375368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.401 qpair failed and we were unable to recover it. 
00:40:06.401 [2024-11-28 13:10:36.375713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.401 [2024-11-28 13:10:36.375741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.401 qpair failed and we were unable to recover it. 00:40:06.401 [2024-11-28 13:10:36.376133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.401 [2024-11-28 13:10:36.376172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.401 qpair failed and we were unable to recover it. 00:40:06.401 [2024-11-28 13:10:36.376521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.401 [2024-11-28 13:10:36.376549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.401 qpair failed and we were unable to recover it. 00:40:06.401 [2024-11-28 13:10:36.376786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.401 [2024-11-28 13:10:36.376814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.401 qpair failed and we were unable to recover it. 00:40:06.401 [2024-11-28 13:10:36.377141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.401 [2024-11-28 13:10:36.377179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.401 qpair failed and we were unable to recover it. 
00:40:06.401 [2024-11-28 13:10:36.377548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.401 [2024-11-28 13:10:36.377575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.401 qpair failed and we were unable to recover it. 00:40:06.401 [2024-11-28 13:10:36.377977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.401 [2024-11-28 13:10:36.378005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.401 qpair failed and we were unable to recover it. 00:40:06.401 [2024-11-28 13:10:36.378364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.401 [2024-11-28 13:10:36.378394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.401 qpair failed and we were unable to recover it. 00:40:06.401 [2024-11-28 13:10:36.378723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.401 [2024-11-28 13:10:36.378750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.401 qpair failed and we were unable to recover it. 00:40:06.401 [2024-11-28 13:10:36.379102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.401 [2024-11-28 13:10:36.379129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.401 qpair failed and we were unable to recover it. 
00:40:06.401 [2024-11-28 13:10:36.379474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.401 [2024-11-28 13:10:36.379503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.401 qpair failed and we were unable to recover it. 00:40:06.401 [2024-11-28 13:10:36.379847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.401 [2024-11-28 13:10:36.379875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.401 qpair failed and we were unable to recover it. 00:40:06.401 [2024-11-28 13:10:36.380223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.401 [2024-11-28 13:10:36.380252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.401 qpair failed and we were unable to recover it. 00:40:06.401 [2024-11-28 13:10:36.380502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.401 [2024-11-28 13:10:36.380535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.401 qpair failed and we were unable to recover it. 00:40:06.401 [2024-11-28 13:10:36.380885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.401 [2024-11-28 13:10:36.380913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.401 qpair failed and we were unable to recover it. 
00:40:06.401 [2024-11-28 13:10:36.381270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.401 [2024-11-28 13:10:36.381299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.401 qpair failed and we were unable to recover it. 00:40:06.401 [2024-11-28 13:10:36.381649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.381676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.381967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.381994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.382437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.382466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.382803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.382831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 
00:40:06.402 [2024-11-28 13:10:36.383183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.383213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.383645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.383673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.383896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.383927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.384317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.384346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.384690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.384718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 
00:40:06.402 [2024-11-28 13:10:36.384976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.385004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.385349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.385378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.385730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.385758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.386145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.386185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.386505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.386533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 
00:40:06.402 [2024-11-28 13:10:36.386878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.386906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.387299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.387329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.387674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.387702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.388052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.388080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.388426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.388456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 
00:40:06.402 [2024-11-28 13:10:36.388800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.388828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.389178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.389208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.389379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.389406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.389779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.389806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.390156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.390194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 
00:40:06.402 [2024-11-28 13:10:36.390529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.390557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.390811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.390839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.391205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.391235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.391619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.391647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.391989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.392017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 
00:40:06.402 [2024-11-28 13:10:36.392396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.392426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.402 qpair failed and we were unable to recover it. 00:40:06.402 [2024-11-28 13:10:36.392772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.402 [2024-11-28 13:10:36.392800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 00:40:06.403 [2024-11-28 13:10:36.393172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.393201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 00:40:06.403 [2024-11-28 13:10:36.393548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.393577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 00:40:06.403 [2024-11-28 13:10:36.393950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.393979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 
00:40:06.403 [2024-11-28 13:10:36.394299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.394327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 00:40:06.403 [2024-11-28 13:10:36.394674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.394701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 00:40:06.403 [2024-11-28 13:10:36.395042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.395070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 00:40:06.403 [2024-11-28 13:10:36.395404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.395439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 00:40:06.403 [2024-11-28 13:10:36.395789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.395816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 
00:40:06.403 [2024-11-28 13:10:36.396072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.396100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 00:40:06.403 [2024-11-28 13:10:36.396465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.396494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 00:40:06.403 [2024-11-28 13:10:36.396910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.396938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 00:40:06.403 [2024-11-28 13:10:36.397286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.397316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 00:40:06.403 [2024-11-28 13:10:36.397640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.397667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 
00:40:06.403 [2024-11-28 13:10:36.397989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.398017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 00:40:06.403 [2024-11-28 13:10:36.398362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.398392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 00:40:06.403 [2024-11-28 13:10:36.398746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.398773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 00:40:06.403 [2024-11-28 13:10:36.399097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.399125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 00:40:06.403 [2024-11-28 13:10:36.399395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.399427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 
00:40:06.403 [2024-11-28 13:10:36.399689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.399717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 00:40:06.403 [2024-11-28 13:10:36.400052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.400079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 00:40:06.403 [2024-11-28 13:10:36.400417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.400448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 00:40:06.403 [2024-11-28 13:10:36.400871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.400899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 00:40:06.403 [2024-11-28 13:10:36.401132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.401169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 
00:40:06.403 [2024-11-28 13:10:36.401518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.401546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 00:40:06.403 [2024-11-28 13:10:36.401902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.401930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 00:40:06.403 [2024-11-28 13:10:36.402281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.402310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 00:40:06.403 [2024-11-28 13:10:36.402669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.402696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 00:40:06.403 [2024-11-28 13:10:36.403045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.403 [2024-11-28 13:10:36.403073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.403 qpair failed and we were unable to recover it. 
00:40:06.403-00:40:06.407 [2024-11-28 13:10:36.403405 through 13:10:36.442826] (same error sequence repeated ~110 more times: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:40:06.407 [2024-11-28 13:10:36.443175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.407 [2024-11-28 13:10:36.443205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.407 qpair failed and we were unable to recover it.
00:40:06.407 [2024-11-28 13:10:36.443431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3680436 Killed "${NVMF_APP[@]}" "$@"
00:40:06.407 [2024-11-28 13:10:36.443460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.407 qpair failed and we were unable to recover it.
00:40:06.408 [2024-11-28 13:10:36.443790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.408 [2024-11-28 13:10:36.443818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.408 qpair failed and we were unable to recover it.
00:40:06.408 [2024-11-28 13:10:36.444177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.408 [2024-11-28 13:10:36.444208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.408 qpair failed and we were unable to recover it.
00:40:06.408 13:10:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:40:06.408 [2024-11-28 13:10:36.444562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.408 [2024-11-28 13:10:36.444590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.408 qpair failed and we were unable to recover it.
00:40:06.408 13:10:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:40:06.408 [2024-11-28 13:10:36.444921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.408 [2024-11-28 13:10:36.444949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.408 qpair failed and we were unable to recover it.
00:40:06.408 13:10:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:40:06.408 [2024-11-28 13:10:36.445282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.408 [2024-11-28 13:10:36.445312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.408 qpair failed and we were unable to recover it.
00:40:06.408 13:10:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:40:06.408 [2024-11-28 13:10:36.445644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.408 [2024-11-28 13:10:36.445673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.408 qpair failed and we were unable to recover it.
00:40:06.408 13:10:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:06.408 [2024-11-28 13:10:36.446009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.408 [2024-11-28 13:10:36.446037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.408 qpair failed and we were unable to recover it. 00:40:06.408 [2024-11-28 13:10:36.446365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.408 [2024-11-28 13:10:36.446394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.408 qpair failed and we were unable to recover it. 00:40:06.408 [2024-11-28 13:10:36.446619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.408 [2024-11-28 13:10:36.446647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.408 qpair failed and we were unable to recover it. 00:40:06.408 [2024-11-28 13:10:36.446981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.408 [2024-11-28 13:10:36.447010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.408 qpair failed and we were unable to recover it. 00:40:06.408 [2024-11-28 13:10:36.447262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.408 [2024-11-28 13:10:36.447291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.408 qpair failed and we were unable to recover it. 
00:40:06.408 [2024-11-28 13:10:36.447662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.408 [2024-11-28 13:10:36.447691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.408 qpair failed and we were unable to recover it. 00:40:06.408 [2024-11-28 13:10:36.448031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.408 [2024-11-28 13:10:36.448059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.408 qpair failed and we were unable to recover it. 00:40:06.408 [2024-11-28 13:10:36.448451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.408 [2024-11-28 13:10:36.448480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.408 qpair failed and we were unable to recover it. 00:40:06.408 [2024-11-28 13:10:36.448900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.408 [2024-11-28 13:10:36.448928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.408 qpair failed and we were unable to recover it. 00:40:06.408 [2024-11-28 13:10:36.449269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.408 [2024-11-28 13:10:36.449299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.408 qpair failed and we were unable to recover it. 
00:40:06.408 [2024-11-28 13:10:36.449664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.408 [2024-11-28 13:10:36.449691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.408 qpair failed and we were unable to recover it. 00:40:06.408 [2024-11-28 13:10:36.450046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.408 [2024-11-28 13:10:36.450074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.408 qpair failed and we were unable to recover it. 00:40:06.408 [2024-11-28 13:10:36.450465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.408 [2024-11-28 13:10:36.450495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.408 qpair failed and we were unable to recover it. 00:40:06.408 [2024-11-28 13:10:36.450808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.408 [2024-11-28 13:10:36.450836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.408 qpair failed and we were unable to recover it. 00:40:06.408 [2024-11-28 13:10:36.451172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.408 [2024-11-28 13:10:36.451201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.408 qpair failed and we were unable to recover it. 
00:40:06.408 [2024-11-28 13:10:36.451333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.408 [2024-11-28 13:10:36.451363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.408 qpair failed and we were unable to recover it. 00:40:06.408 [2024-11-28 13:10:36.451680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.408 [2024-11-28 13:10:36.451708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.408 qpair failed and we were unable to recover it. 00:40:06.408 [2024-11-28 13:10:36.452066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.408 [2024-11-28 13:10:36.452095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.408 qpair failed and we were unable to recover it. 00:40:06.408 [2024-11-28 13:10:36.452457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.408 [2024-11-28 13:10:36.452487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.408 qpair failed and we were unable to recover it. 00:40:06.408 [2024-11-28 13:10:36.452834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.408 [2024-11-28 13:10:36.452862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.408 qpair failed and we were unable to recover it. 
00:40:06.408 [2024-11-28 13:10:36.453232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.409 [2024-11-28 13:10:36.453262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.409 qpair failed and we were unable to recover it.
00:40:06.409 [2024-11-28 13:10:36.453530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.409 [2024-11-28 13:10:36.453558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.409 qpair failed and we were unable to recover it.
00:40:06.409 13:10:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3681448
00:40:06.409 [2024-11-28 13:10:36.453884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.409 [2024-11-28 13:10:36.453915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.409 qpair failed and we were unable to recover it.
00:40:06.409 13:10:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3681448
00:40:06.409 [2024-11-28 13:10:36.454277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.409 [2024-11-28 13:10:36.454306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.409 qpair failed and we were unable to recover it.
00:40:06.409 13:10:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
13:10:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3681448 ']'
[2024-11-28 13:10:36.454706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.409 [2024-11-28 13:10:36.454734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.409 qpair failed and we were unable to recover it.
00:40:06.409 13:10:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:40:06.409 [2024-11-28 13:10:36.455063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.409 [2024-11-28 13:10:36.455092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.409 qpair failed and we were unable to recover it.
00:40:06.409 13:10:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:40:06.409 [2024-11-28 13:10:36.455462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.409 [2024-11-28 13:10:36.455492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.409 qpair failed and we were unable to recover it.
00:40:06.409 13:10:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:40:06.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:40:06.409 13:10:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:40:06.409 [2024-11-28 13:10:36.455852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.409 [2024-11-28 13:10:36.455881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.409 qpair failed and we were unable to recover it.
00:40:06.409 13:10:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:40:06.409 [2024-11-28 13:10:36.456124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.409 [2024-11-28 13:10:36.456153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.409 qpair failed and we were unable to recover it.
00:40:06.409 [2024-11-28 13:10:36.456417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.409 [2024-11-28 13:10:36.456446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.409 qpair failed and we were unable to recover it.
00:40:06.409 [2024-11-28 13:10:36.456679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.409 [2024-11-28 13:10:36.456707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.409 qpair failed and we were unable to recover it.
00:40:06.409 [2024-11-28 13:10:36.457035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.409 [2024-11-28 13:10:36.457063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.409 qpair failed and we were unable to recover it. 00:40:06.409 [2024-11-28 13:10:36.457380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.409 [2024-11-28 13:10:36.457410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.409 qpair failed and we were unable to recover it. 00:40:06.409 [2024-11-28 13:10:36.457758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.409 [2024-11-28 13:10:36.457787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.409 qpair failed and we were unable to recover it. 00:40:06.409 [2024-11-28 13:10:36.457920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.409 [2024-11-28 13:10:36.457948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.409 qpair failed and we were unable to recover it. 00:40:06.409 [2024-11-28 13:10:36.458263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.409 [2024-11-28 13:10:36.458293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.409 qpair failed and we were unable to recover it. 
00:40:06.409 [2024-11-28 13:10:36.458639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.409 [2024-11-28 13:10:36.458676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.409 qpair failed and we were unable to recover it. 00:40:06.409 [2024-11-28 13:10:36.459029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.409 [2024-11-28 13:10:36.459058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.409 qpair failed and we were unable to recover it. 00:40:06.409 [2024-11-28 13:10:36.459420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.409 [2024-11-28 13:10:36.459450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.409 qpair failed and we were unable to recover it. 00:40:06.409 [2024-11-28 13:10:36.459781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.409 [2024-11-28 13:10:36.459810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.409 qpair failed and we were unable to recover it. 00:40:06.409 [2024-11-28 13:10:36.460060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.409 [2024-11-28 13:10:36.460089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.409 qpair failed and we were unable to recover it. 
00:40:06.409 [2024-11-28 13:10:36.460350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.409 [2024-11-28 13:10:36.460379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.409 qpair failed and we were unable to recover it. 00:40:06.409 [2024-11-28 13:10:36.460737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.409 [2024-11-28 13:10:36.460770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.409 qpair failed and we were unable to recover it. 00:40:06.409 [2024-11-28 13:10:36.460987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.409 [2024-11-28 13:10:36.461016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.409 qpair failed and we were unable to recover it. 00:40:06.409 [2024-11-28 13:10:36.461354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.409 [2024-11-28 13:10:36.461384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.409 qpair failed and we were unable to recover it. 00:40:06.409 [2024-11-28 13:10:36.461748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.409 [2024-11-28 13:10:36.461777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.409 qpair failed and we were unable to recover it. 
00:40:06.409 [2024-11-28 13:10:36.462127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.462155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 00:40:06.410 [2024-11-28 13:10:36.462490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.462518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 00:40:06.410 [2024-11-28 13:10:36.462860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.462888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 00:40:06.410 [2024-11-28 13:10:36.463110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.463138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 00:40:06.410 [2024-11-28 13:10:36.463305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.463335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 
00:40:06.410 [2024-11-28 13:10:36.463688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.463717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 00:40:06.410 [2024-11-28 13:10:36.464056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.464084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 00:40:06.410 [2024-11-28 13:10:36.464504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.464535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 00:40:06.410 [2024-11-28 13:10:36.464875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.464904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 00:40:06.410 [2024-11-28 13:10:36.465271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.465302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 
00:40:06.410 [2024-11-28 13:10:36.465647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.465675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 00:40:06.410 [2024-11-28 13:10:36.466040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.466069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 00:40:06.410 [2024-11-28 13:10:36.466428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.466458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 00:40:06.410 [2024-11-28 13:10:36.466809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.466838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 00:40:06.410 [2024-11-28 13:10:36.467177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.467207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 
00:40:06.410 [2024-11-28 13:10:36.467433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.467462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 00:40:06.410 [2024-11-28 13:10:36.467807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.467835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 00:40:06.410 [2024-11-28 13:10:36.468177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.468207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 00:40:06.410 [2024-11-28 13:10:36.468565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.468595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 00:40:06.410 [2024-11-28 13:10:36.468863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.468892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 
00:40:06.410 [2024-11-28 13:10:36.469236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.469267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 00:40:06.410 [2024-11-28 13:10:36.469615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.469644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 00:40:06.410 [2024-11-28 13:10:36.469989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.470018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 00:40:06.410 [2024-11-28 13:10:36.470354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.470384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 00:40:06.410 [2024-11-28 13:10:36.470726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.410 [2024-11-28 13:10:36.470754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.410 qpair failed and we were unable to recover it. 
00:40:06.410 [2024-11-28 13:10:36.471094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.410 [2024-11-28 13:10:36.471122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.410 qpair failed and we were unable to recover it.
00:40:06.410 [2024-11-28 13:10:36.471505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.410 [2024-11-28 13:10:36.471535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.410 qpair failed and we were unable to recover it.
00:40:06.410 [2024-11-28 13:10:36.471896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.410 [2024-11-28 13:10:36.471926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.410 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.472279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.472307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.472675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.472703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.473045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.473074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.473403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.473434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.473776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.473805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.474137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.474200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.474572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.474602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.474971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.474999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.475325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.475355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.475669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.475699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.476083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.476111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.476430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.476460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.476812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.476840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.477093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.477121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.477519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.477550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.477918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.477946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.478308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.478338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.478668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.478697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.479063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.479090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.479351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.479392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.479621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.479648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.479992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.480019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.480268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.480297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.480645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.480673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.480916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.480948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.483347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.483415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.483708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.483745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.484143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.484189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.484543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.484572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.484811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.484840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.485192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.411 [2024-11-28 13:10:36.485224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.411 qpair failed and we were unable to recover it.
00:40:06.411 [2024-11-28 13:10:36.485592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.412 [2024-11-28 13:10:36.485621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.412 qpair failed and we were unable to recover it.
00:40:06.412 [2024-11-28 13:10:36.485976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.412 [2024-11-28 13:10:36.486005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.412 qpair failed and we were unable to recover it.
00:40:06.412 [2024-11-28 13:10:36.486379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.412 [2024-11-28 13:10:36.486410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.412 qpair failed and we were unable to recover it.
00:40:06.412 [2024-11-28 13:10:36.486738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.412 [2024-11-28 13:10:36.486765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.412 qpair failed and we were unable to recover it.
00:40:06.412 [2024-11-28 13:10:36.486964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.412 [2024-11-28 13:10:36.486994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.412 qpair failed and we were unable to recover it.
00:40:06.412 [2024-11-28 13:10:36.487488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.412 [2024-11-28 13:10:36.487518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.412 qpair failed and we were unable to recover it.
00:40:06.412 [2024-11-28 13:10:36.487675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.412 [2024-11-28 13:10:36.487702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.412 qpair failed and we were unable to recover it.
00:40:06.412 [2024-11-28 13:10:36.488057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.412 [2024-11-28 13:10:36.488086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.412 qpair failed and we were unable to recover it.
00:40:06.412 [2024-11-28 13:10:36.488451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.412 [2024-11-28 13:10:36.488481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.412 qpair failed and we were unable to recover it.
00:40:06.412 [2024-11-28 13:10:36.488845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.412 [2024-11-28 13:10:36.488873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.412 qpair failed and we were unable to recover it.
00:40:06.412 [2024-11-28 13:10:36.489226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.412 [2024-11-28 13:10:36.489257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.412 qpair failed and we were unable to recover it.
00:40:06.412 [2024-11-28 13:10:36.489630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.412 [2024-11-28 13:10:36.489659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.412 qpair failed and we were unable to recover it.
00:40:06.412 [2024-11-28 13:10:36.490031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.412 [2024-11-28 13:10:36.490059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.412 qpair failed and we were unable to recover it.
00:40:06.412 [2024-11-28 13:10:36.490325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.412 [2024-11-28 13:10:36.490356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.412 qpair failed and we were unable to recover it.
00:40:06.692 [2024-11-28 13:10:36.490587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.692 [2024-11-28 13:10:36.490619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.692 qpair failed and we were unable to recover it.
00:40:06.692 [2024-11-28 13:10:36.491014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.692 [2024-11-28 13:10:36.491044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.692 qpair failed and we were unable to recover it.
00:40:06.692 [2024-11-28 13:10:36.491281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.692 [2024-11-28 13:10:36.491311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.692 qpair failed and we were unable to recover it.
00:40:06.692 [2024-11-28 13:10:36.491664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.692 [2024-11-28 13:10:36.491693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.692 qpair failed and we were unable to recover it.
00:40:06.692 [2024-11-28 13:10:36.492058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.692 [2024-11-28 13:10:36.492087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.692 qpair failed and we were unable to recover it.
00:40:06.692 [2024-11-28 13:10:36.492461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.692 [2024-11-28 13:10:36.492492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.692 qpair failed and we were unable to recover it.
00:40:06.692 [2024-11-28 13:10:36.492854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.692 [2024-11-28 13:10:36.492883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.692 qpair failed and we were unable to recover it.
00:40:06.692 [2024-11-28 13:10:36.493233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.692 [2024-11-28 13:10:36.493264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.692 qpair failed and we were unable to recover it.
00:40:06.692 [2024-11-28 13:10:36.493594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.692 [2024-11-28 13:10:36.493623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.692 qpair failed and we were unable to recover it.
00:40:06.692 [2024-11-28 13:10:36.494000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.692 [2024-11-28 13:10:36.494029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.692 qpair failed and we were unable to recover it.
00:40:06.692 [2024-11-28 13:10:36.494380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.692 [2024-11-28 13:10:36.494411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.692 qpair failed and we were unable to recover it.
00:40:06.692 [2024-11-28 13:10:36.494762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.692 [2024-11-28 13:10:36.494790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.692 qpair failed and we were unable to recover it.
00:40:06.692 [2024-11-28 13:10:36.495039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.692 [2024-11-28 13:10:36.495068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.692 qpair failed and we were unable to recover it.
00:40:06.692 [2024-11-28 13:10:36.495233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.692 [2024-11-28 13:10:36.495263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.692 qpair failed and we were unable to recover it.
00:40:06.692 [2024-11-28 13:10:36.495620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.692 [2024-11-28 13:10:36.495654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.692 qpair failed and we were unable to recover it.
00:40:06.692 [2024-11-28 13:10:36.496000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.692 [2024-11-28 13:10:36.496028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.692 qpair failed and we were unable to recover it.
00:40:06.692 [2024-11-28 13:10:36.496276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.496305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.496656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.496683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.497020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.497049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.497392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.497422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.497772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.497800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.498051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.498079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.498418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.498448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.498804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.498833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.499045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.499074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.499420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.499449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.499807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.499835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.500189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.500217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.500615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.500643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.501001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.501030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.501375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.501405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.501653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.501681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.502027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.502055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.502395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.502425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.502786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.502814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.503174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.503204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.503568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.503597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.503833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.503866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.504204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.504233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.504578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.504606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.504962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.504990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.505350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.505380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.505748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.505777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.506024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.506052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.506442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.506475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.506805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.693 [2024-11-28 13:10:36.506834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.693 qpair failed and we were unable to recover it.
00:40:06.693 [2024-11-28 13:10:36.507238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.694 [2024-11-28 13:10:36.507267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.694 qpair failed and we were unable to recover it.
00:40:06.694 [2024-11-28 13:10:36.507598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.507626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.507989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.508017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.508370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.508400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.508702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.508730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.509100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.509128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.509442] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:40:06.694 [2024-11-28 13:10:36.509495] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:06.694 [2024-11-28 13:10:36.509491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.509520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.509887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.509915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.510276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.510306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.510673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.510701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 
00:40:06.694 [2024-11-28 13:10:36.511044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.511072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.511436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.511465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.511825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.511854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.512211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.512240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.512610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.512638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 
00:40:06.694 [2024-11-28 13:10:36.512997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.513025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.513381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.513411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.513827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.513855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.514212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.514243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.514597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.514626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 
00:40:06.694 [2024-11-28 13:10:36.514987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.515022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.515372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.515402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.515657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.515686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.516042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.516071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.516424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.516454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 
00:40:06.694 [2024-11-28 13:10:36.516697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.516726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.517082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.517110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.517543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.517573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.517920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.517950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 00:40:06.694 [2024-11-28 13:10:36.518082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.694 [2024-11-28 13:10:36.518110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.694 qpair failed and we were unable to recover it. 
00:40:06.695 [2024-11-28 13:10:36.518474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.518505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 00:40:06.695 [2024-11-28 13:10:36.518810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.518838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 00:40:06.695 [2024-11-28 13:10:36.519202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.519233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 00:40:06.695 [2024-11-28 13:10:36.519503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.519537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 00:40:06.695 [2024-11-28 13:10:36.519754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.519783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 
00:40:06.695 [2024-11-28 13:10:36.520075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.520104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 00:40:06.695 [2024-11-28 13:10:36.520487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.520518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 00:40:06.695 [2024-11-28 13:10:36.520875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.520904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 00:40:06.695 [2024-11-28 13:10:36.521280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.521310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 00:40:06.695 [2024-11-28 13:10:36.521545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.521578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 
00:40:06.695 [2024-11-28 13:10:36.521936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.521965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 00:40:06.695 [2024-11-28 13:10:36.522301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.522332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 00:40:06.695 [2024-11-28 13:10:36.522700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.522729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 00:40:06.695 [2024-11-28 13:10:36.522964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.522992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 00:40:06.695 [2024-11-28 13:10:36.523354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.523384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 
00:40:06.695 [2024-11-28 13:10:36.523748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.523777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 00:40:06.695 [2024-11-28 13:10:36.524151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.524192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 00:40:06.695 [2024-11-28 13:10:36.524565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.524594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 00:40:06.695 [2024-11-28 13:10:36.524955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.524984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 00:40:06.695 [2024-11-28 13:10:36.525224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.525253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 
00:40:06.695 [2024-11-28 13:10:36.525621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.525649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 00:40:06.695 [2024-11-28 13:10:36.525997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.526026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 00:40:06.695 [2024-11-28 13:10:36.526382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.526411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 00:40:06.695 [2024-11-28 13:10:36.526782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.526810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 00:40:06.695 [2024-11-28 13:10:36.527175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.527205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 
00:40:06.695 [2024-11-28 13:10:36.527546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.527575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 00:40:06.695 [2024-11-28 13:10:36.527917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.527945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 00:40:06.695 [2024-11-28 13:10:36.528301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.695 [2024-11-28 13:10:36.528330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.695 qpair failed and we were unable to recover it. 00:40:06.696 [2024-11-28 13:10:36.528581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.528608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 00:40:06.696 [2024-11-28 13:10:36.528835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.528864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 
00:40:06.696 [2024-11-28 13:10:36.529214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.529250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 00:40:06.696 [2024-11-28 13:10:36.529589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.529616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 00:40:06.696 [2024-11-28 13:10:36.529835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.529864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 00:40:06.696 [2024-11-28 13:10:36.530143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.530182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 00:40:06.696 [2024-11-28 13:10:36.530518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.530545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 
00:40:06.696 [2024-11-28 13:10:36.530896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.530924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 00:40:06.696 [2024-11-28 13:10:36.531280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.531309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 00:40:06.696 [2024-11-28 13:10:36.531674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.531701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 00:40:06.696 [2024-11-28 13:10:36.532058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.532086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 00:40:06.696 [2024-11-28 13:10:36.532457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.532487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 
00:40:06.696 [2024-11-28 13:10:36.532790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.532817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 00:40:06.696 [2024-11-28 13:10:36.533148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.533188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 00:40:06.696 [2024-11-28 13:10:36.533525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.533554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 00:40:06.696 [2024-11-28 13:10:36.533885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.533912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 00:40:06.696 [2024-11-28 13:10:36.534274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.534305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 
00:40:06.696 [2024-11-28 13:10:36.534555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.534582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 00:40:06.696 [2024-11-28 13:10:36.534934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.534962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 00:40:06.696 [2024-11-28 13:10:36.535239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.535271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 00:40:06.696 [2024-11-28 13:10:36.535642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.535670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 00:40:06.696 [2024-11-28 13:10:36.536039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.536067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 
00:40:06.696 [2024-11-28 13:10:36.536313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.536346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 00:40:06.696 [2024-11-28 13:10:36.536702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.536731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 00:40:06.696 [2024-11-28 13:10:36.537096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.537124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 00:40:06.696 [2024-11-28 13:10:36.537478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.537507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 00:40:06.696 [2024-11-28 13:10:36.537889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.696 [2024-11-28 13:10:36.537917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.696 qpair failed and we were unable to recover it. 
00:40:06.696 [2024-11-28 13:10:36.538281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.696 [2024-11-28 13:10:36.538310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.696 qpair failed and we were unable to recover it.
[the same three-line sequence — connect() failed (errno = 111), sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it — repeats continuously from 13:10:36.538676 through 13:10:36.580133]
00:40:06.700 [2024-11-28 13:10:36.580490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.701 [2024-11-28 13:10:36.580519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.701 qpair failed and we were unable to recover it.
00:40:06.701 [2024-11-28 13:10:36.580758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.580790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.581137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.581175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.581530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.581560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.581916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.581944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.582302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.582332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 
00:40:06.701 [2024-11-28 13:10:36.582683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.582711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.583077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.583105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.583451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.583482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.583779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.583809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.584181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.584211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 
00:40:06.701 [2024-11-28 13:10:36.584573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.584601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.584949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.584978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.585344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.585374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.585730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.585760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.586112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.586141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 
00:40:06.701 [2024-11-28 13:10:36.586517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.586546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.586912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.586940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.587305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.587335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.587682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.587710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.588048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.588076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 
00:40:06.701 [2024-11-28 13:10:36.588438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.588474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.588842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.588871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.589313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.589343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.589691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.589719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.590086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.590114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 
00:40:06.701 [2024-11-28 13:10:36.590484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.590515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.590755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.590787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.591129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.591168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.591543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.701 [2024-11-28 13:10:36.591571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.701 qpair failed and we were unable to recover it. 00:40:06.701 [2024-11-28 13:10:36.591926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.591955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 
00:40:06.702 [2024-11-28 13:10:36.592310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.592340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 00:40:06.702 [2024-11-28 13:10:36.592694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.592721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 00:40:06.702 [2024-11-28 13:10:36.593130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.593169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 00:40:06.702 [2024-11-28 13:10:36.593524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.593553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 00:40:06.702 [2024-11-28 13:10:36.593927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.593956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 
00:40:06.702 [2024-11-28 13:10:36.594314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.594344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 00:40:06.702 [2024-11-28 13:10:36.594586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.594619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 00:40:06.702 [2024-11-28 13:10:36.594965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.594995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 00:40:06.702 [2024-11-28 13:10:36.595335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.595365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 00:40:06.702 [2024-11-28 13:10:36.595726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.595755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 
00:40:06.702 [2024-11-28 13:10:36.596118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.596146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 00:40:06.702 [2024-11-28 13:10:36.596515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.596544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 00:40:06.702 [2024-11-28 13:10:36.596895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.596923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 00:40:06.702 [2024-11-28 13:10:36.597291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.597320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 00:40:06.702 [2024-11-28 13:10:36.597683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.597711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 
00:40:06.702 [2024-11-28 13:10:36.598071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.598101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 00:40:06.702 [2024-11-28 13:10:36.598467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.598498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 00:40:06.702 [2024-11-28 13:10:36.598855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.598890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 00:40:06.702 [2024-11-28 13:10:36.599223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.599253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 00:40:06.702 [2024-11-28 13:10:36.599644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.599672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 
00:40:06.702 [2024-11-28 13:10:36.600031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.600059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 00:40:06.702 [2024-11-28 13:10:36.600331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.600360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 00:40:06.702 [2024-11-28 13:10:36.600715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.600743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 00:40:06.702 [2024-11-28 13:10:36.600995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.601023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 00:40:06.702 [2024-11-28 13:10:36.601386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.601416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 
00:40:06.702 [2024-11-28 13:10:36.601671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.601700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 00:40:06.702 [2024-11-28 13:10:36.602050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.602078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 00:40:06.702 [2024-11-28 13:10:36.602441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.702 [2024-11-28 13:10:36.602472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.702 qpair failed and we were unable to recover it. 00:40:06.703 [2024-11-28 13:10:36.602817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.602845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 00:40:06.703 [2024-11-28 13:10:36.603214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.603243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 
00:40:06.703 [2024-11-28 13:10:36.603586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.603613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 00:40:06.703 [2024-11-28 13:10:36.603976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.604005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 00:40:06.703 [2024-11-28 13:10:36.604364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.604393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 00:40:06.703 [2024-11-28 13:10:36.604754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.604784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 00:40:06.703 [2024-11-28 13:10:36.605151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.605196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 
00:40:06.703 [2024-11-28 13:10:36.605628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.605656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 00:40:06.703 [2024-11-28 13:10:36.605911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.605941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 00:40:06.703 [2024-11-28 13:10:36.606298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.606330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 00:40:06.703 [2024-11-28 13:10:36.606694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.606723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 00:40:06.703 [2024-11-28 13:10:36.606959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.606989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 
00:40:06.703 [2024-11-28 13:10:36.607338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.607368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 00:40:06.703 [2024-11-28 13:10:36.607739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.607768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 00:40:06.703 [2024-11-28 13:10:36.608120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.608149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 00:40:06.703 [2024-11-28 13:10:36.608489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.608518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 00:40:06.703 [2024-11-28 13:10:36.608889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.608919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 
00:40:06.703 [2024-11-28 13:10:36.609279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.609310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 00:40:06.703 [2024-11-28 13:10:36.609676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.609705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 00:40:06.703 [2024-11-28 13:10:36.610066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.610094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 00:40:06.703 [2024-11-28 13:10:36.610344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.610377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 00:40:06.703 [2024-11-28 13:10:36.610735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.610764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 
00:40:06.703 [2024-11-28 13:10:36.611212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.611242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 00:40:06.703 [2024-11-28 13:10:36.611578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.611606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 00:40:06.703 [2024-11-28 13:10:36.611966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.611994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 00:40:06.703 [2024-11-28 13:10:36.612336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.612366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 00:40:06.703 [2024-11-28 13:10:36.612722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.612752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.703 qpair failed and we were unable to recover it. 
00:40:06.703 [2024-11-28 13:10:36.613111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.703 [2024-11-28 13:10:36.613139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 00:40:06.704 [2024-11-28 13:10:36.613562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.613592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 00:40:06.704 [2024-11-28 13:10:36.613837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.613875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 00:40:06.704 [2024-11-28 13:10:36.614218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.614249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 00:40:06.704 [2024-11-28 13:10:36.614644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.614673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 
00:40:06.704 [2024-11-28 13:10:36.615027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.615057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 00:40:06.704 [2024-11-28 13:10:36.615398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.615429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 00:40:06.704 [2024-11-28 13:10:36.615786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.615814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 00:40:06.704 [2024-11-28 13:10:36.616177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.616208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 00:40:06.704 [2024-11-28 13:10:36.616544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.616571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 
00:40:06.704 [2024-11-28 13:10:36.616931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.616960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 00:40:06.704 [2024-11-28 13:10:36.617251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.617280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 00:40:06.704 [2024-11-28 13:10:36.617614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.617641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 00:40:06.704 [2024-11-28 13:10:36.617996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.618024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 00:40:06.704 [2024-11-28 13:10:36.618381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.618412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 
00:40:06.704 [2024-11-28 13:10:36.618748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.618777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 00:40:06.704 [2024-11-28 13:10:36.619139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.619177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 00:40:06.704 [2024-11-28 13:10:36.619425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.619453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 00:40:06.704 [2024-11-28 13:10:36.619838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.619867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 00:40:06.704 [2024-11-28 13:10:36.620233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.620264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 
00:40:06.704 [2024-11-28 13:10:36.620619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.620647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 00:40:06.704 [2024-11-28 13:10:36.621002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.621030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 00:40:06.704 [2024-11-28 13:10:36.621391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.621420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 00:40:06.704 [2024-11-28 13:10:36.621775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.621803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 00:40:06.704 [2024-11-28 13:10:36.622190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.622220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 
00:40:06.704 [2024-11-28 13:10:36.622510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.704 [2024-11-28 13:10:36.622539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.704 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.622827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.622855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.623224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.623254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.623604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.623633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.623993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.624022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 
00:40:06.705 [2024-11-28 13:10:36.624380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.624409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.624756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.624784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.625151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.625189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.625529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.625557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.625927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.625955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 
00:40:06.705 [2024-11-28 13:10:36.626213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.626242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.626593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.626620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.626875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.626903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.627310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.627340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.627699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.627728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 
00:40:06.705 [2024-11-28 13:10:36.627969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.627998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.628301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.628330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.628554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.628594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.628864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.628894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.629255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.629285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 
00:40:06.705 [2024-11-28 13:10:36.629652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.629681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.630045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.630073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.630326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.630356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.630723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.630751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.631114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.631144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 
00:40:06.705 [2024-11-28 13:10:36.631519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.631548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.631746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.631775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.632139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.632179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.632438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.632470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.632827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.632856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 
00:40:06.705 [2024-11-28 13:10:36.633217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.633250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.633671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.705 [2024-11-28 13:10:36.633701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.705 qpair failed and we were unable to recover it. 00:40:06.705 [2024-11-28 13:10:36.634066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.634094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 00:40:06.706 [2024-11-28 13:10:36.634462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.634491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 00:40:06.706 [2024-11-28 13:10:36.634854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.634883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 
00:40:06.706 [2024-11-28 13:10:36.635246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.635276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 00:40:06.706 [2024-11-28 13:10:36.635659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.635687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 00:40:06.706 [2024-11-28 13:10:36.636050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.636079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 00:40:06.706 [2024-11-28 13:10:36.636439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.636468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 00:40:06.706 [2024-11-28 13:10:36.636830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.636858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 
00:40:06.706 [2024-11-28 13:10:36.637221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.637252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 00:40:06.706 [2024-11-28 13:10:36.637608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.637636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 00:40:06.706 [2024-11-28 13:10:36.637939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.637969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 00:40:06.706 [2024-11-28 13:10:36.638325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.638355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 00:40:06.706 [2024-11-28 13:10:36.638721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.638750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 
00:40:06.706 [2024-11-28 13:10:36.638990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.639020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 00:40:06.706 [2024-11-28 13:10:36.639400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.639430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 00:40:06.706 [2024-11-28 13:10:36.639782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.639812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 00:40:06.706 [2024-11-28 13:10:36.640176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.640207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 00:40:06.706 [2024-11-28 13:10:36.640584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.640613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 
00:40:06.706 [2024-11-28 13:10:36.640976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.641006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 00:40:06.706 [2024-11-28 13:10:36.641384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.641415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 00:40:06.706 [2024-11-28 13:10:36.641789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.641817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 00:40:06.706 [2024-11-28 13:10:36.642184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.642215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 00:40:06.706 [2024-11-28 13:10:36.642613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.642643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 
00:40:06.706 [2024-11-28 13:10:36.642985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.643014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 00:40:06.706 [2024-11-28 13:10:36.643355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.643385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 00:40:06.706 [2024-11-28 13:10:36.643755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.643790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 00:40:06.706 [2024-11-28 13:10:36.644037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.644066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 00:40:06.706 [2024-11-28 13:10:36.644451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.644481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 
00:40:06.706 [2024-11-28 13:10:36.644824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.706 [2024-11-28 13:10:36.644854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.706 qpair failed and we were unable to recover it. 00:40:06.706 [2024-11-28 13:10:36.645223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.645254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 00:40:06.707 [2024-11-28 13:10:36.645662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.645693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 00:40:06.707 [2024-11-28 13:10:36.646046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.646075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 00:40:06.707 [2024-11-28 13:10:36.646435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.646465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 
00:40:06.707 [2024-11-28 13:10:36.646723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.646751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 00:40:06.707 [2024-11-28 13:10:36.647092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.647122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 00:40:06.707 [2024-11-28 13:10:36.647427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.647458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 00:40:06.707 [2024-11-28 13:10:36.647838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.647867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 00:40:06.707 [2024-11-28 13:10:36.648114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.648146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 
00:40:06.707 [2024-11-28 13:10:36.648532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.648562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 00:40:06.707 [2024-11-28 13:10:36.648807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.648836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 00:40:06.707 [2024-11-28 13:10:36.649192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.649223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 00:40:06.707 [2024-11-28 13:10:36.649594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.649622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 00:40:06.707 [2024-11-28 13:10:36.649995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.650023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 
00:40:06.707 [2024-11-28 13:10:36.650378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.650408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 00:40:06.707 [2024-11-28 13:10:36.650766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.650795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 00:40:06.707 [2024-11-28 13:10:36.651154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.651210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 00:40:06.707 [2024-11-28 13:10:36.651473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.651502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 00:40:06.707 [2024-11-28 13:10:36.651866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.651895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 
00:40:06.707 [2024-11-28 13:10:36.652250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.652281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 00:40:06.707 [2024-11-28 13:10:36.652531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.652561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 00:40:06.707 [2024-11-28 13:10:36.652814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.652843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 00:40:06.707 [2024-11-28 13:10:36.653205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.653236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 00:40:06.707 [2024-11-28 13:10:36.653277] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:40:06.707 [2024-11-28 13:10:36.653615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.653646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 
00:40:06.707 [2024-11-28 13:10:36.654003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.654032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 00:40:06.707 [2024-11-28 13:10:36.654469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.654499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 00:40:06.707 [2024-11-28 13:10:36.654856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.654883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 00:40:06.707 [2024-11-28 13:10:36.655187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.655216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 00:40:06.707 [2024-11-28 13:10:36.655496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.707 [2024-11-28 13:10:36.655525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.707 qpair failed and we were unable to recover it. 
00:40:06.707 [2024-11-28 13:10:36.655913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.655941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.656314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.656344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.656702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.656730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.657095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.657122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.657539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.657569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 
00:40:06.708 [2024-11-28 13:10:36.657947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.657976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.658193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.658223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.658598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.658627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.658892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.658920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.659356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.659385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 
00:40:06.708 [2024-11-28 13:10:36.659745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.659773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.660137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.660190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.660434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.660466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.660851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.660880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.661239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.661268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 
00:40:06.708 [2024-11-28 13:10:36.661643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.661670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.661922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.661950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.662307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.662337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.662579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.662607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.662957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.662985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 
00:40:06.708 [2024-11-28 13:10:36.663368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.663399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.663764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.663792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.664216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.664247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.664505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.664533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.664883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.664911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 
00:40:06.708 [2024-11-28 13:10:36.665275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.665305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.665658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.665686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.666050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.666079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.666446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.666476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 00:40:06.708 [2024-11-28 13:10:36.666907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.666934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.708 qpair failed and we were unable to recover it. 
00:40:06.708 [2024-11-28 13:10:36.667238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.708 [2024-11-28 13:10:36.667267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 00:40:06.709 [2024-11-28 13:10:36.667634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.667663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 00:40:06.709 [2024-11-28 13:10:36.668036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.668063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 00:40:06.709 [2024-11-28 13:10:36.668399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.668435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 00:40:06.709 [2024-11-28 13:10:36.668784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.668812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 
00:40:06.709 [2024-11-28 13:10:36.669121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.669149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 00:40:06.709 [2024-11-28 13:10:36.669529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.669558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 00:40:06.709 [2024-11-28 13:10:36.669930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.669959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 00:40:06.709 [2024-11-28 13:10:36.670211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.670241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 00:40:06.709 [2024-11-28 13:10:36.670600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.670628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 
00:40:06.709 [2024-11-28 13:10:36.671003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.671031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 00:40:06.709 [2024-11-28 13:10:36.671382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.671411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 00:40:06.709 [2024-11-28 13:10:36.671774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.671803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 00:40:06.709 [2024-11-28 13:10:36.672176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.672207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 00:40:06.709 [2024-11-28 13:10:36.672618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.672645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 
00:40:06.709 [2024-11-28 13:10:36.673010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.673037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 00:40:06.709 [2024-11-28 13:10:36.673381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.673410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 00:40:06.709 [2024-11-28 13:10:36.673774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.673802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 00:40:06.709 [2024-11-28 13:10:36.674037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.674065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 00:40:06.709 [2024-11-28 13:10:36.674433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.674463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 
00:40:06.709 [2024-11-28 13:10:36.674840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.674868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 00:40:06.709 [2024-11-28 13:10:36.675232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.675260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 00:40:06.709 [2024-11-28 13:10:36.675637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.675665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 00:40:06.709 [2024-11-28 13:10:36.676035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.676062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 00:40:06.709 [2024-11-28 13:10:36.676414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.676443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 
00:40:06.709 [2024-11-28 13:10:36.676825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.676853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 00:40:06.709 [2024-11-28 13:10:36.677218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.677248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 00:40:06.709 [2024-11-28 13:10:36.677653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.677681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.709 qpair failed and we were unable to recover it. 00:40:06.709 [2024-11-28 13:10:36.677902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.709 [2024-11-28 13:10:36.677929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.710 qpair failed and we were unable to recover it. 00:40:06.710 [2024-11-28 13:10:36.678301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.710 [2024-11-28 13:10:36.678330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.710 qpair failed and we were unable to recover it. 
00:40:06.710 [2024-11-28 13:10:36.678648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.710 [2024-11-28 13:10:36.678677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.710 qpair failed and we were unable to recover it. 00:40:06.710 [2024-11-28 13:10:36.679062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.710 [2024-11-28 13:10:36.679090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.710 qpair failed and we were unable to recover it. 00:40:06.710 [2024-11-28 13:10:36.679462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.710 [2024-11-28 13:10:36.679491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.710 qpair failed and we were unable to recover it. 00:40:06.710 [2024-11-28 13:10:36.679832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.710 [2024-11-28 13:10:36.679860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.710 qpair failed and we were unable to recover it. 00:40:06.710 [2024-11-28 13:10:36.680222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.710 [2024-11-28 13:10:36.680251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.710 qpair failed and we were unable to recover it. 
00:40:06.710 [2024-11-28 13:10:36.680658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.710 [2024-11-28 13:10:36.680685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.710 qpair failed and we were unable to recover it. 00:40:06.710 [2024-11-28 13:10:36.680934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.710 [2024-11-28 13:10:36.680963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.710 qpair failed and we were unable to recover it. 00:40:06.710 [2024-11-28 13:10:36.681365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.710 [2024-11-28 13:10:36.681394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.710 qpair failed and we were unable to recover it. 00:40:06.710 [2024-11-28 13:10:36.681753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.710 [2024-11-28 13:10:36.681780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.710 qpair failed and we were unable to recover it. 00:40:06.710 [2024-11-28 13:10:36.682147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.710 [2024-11-28 13:10:36.682187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.710 qpair failed and we were unable to recover it. 
00:40:06.710 [2024-11-28 13:10:36.682530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.710 [2024-11-28 13:10:36.682559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.710 qpair failed and we were unable to recover it. 00:40:06.710 [2024-11-28 13:10:36.682927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.710 [2024-11-28 13:10:36.682955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.710 qpair failed and we were unable to recover it. 00:40:06.710 [2024-11-28 13:10:36.683325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.710 [2024-11-28 13:10:36.683356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.710 qpair failed and we were unable to recover it. 00:40:06.710 [2024-11-28 13:10:36.683732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.710 [2024-11-28 13:10:36.683769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.710 qpair failed and we were unable to recover it. 00:40:06.710 [2024-11-28 13:10:36.684172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.710 [2024-11-28 13:10:36.684202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:06.710 qpair failed and we were unable to recover it. 
00:40:06.710 [2024-11-28 13:10:36.684536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.710 [2024-11-28 13:10:36.684564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.710 qpair failed and we were unable to recover it.
00:40:06.710 [2024-11-28 13:10:36.684795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.710 [2024-11-28 13:10:36.684822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.710 qpair failed and we were unable to recover it.
00:40:06.710 [2024-11-28 13:10:36.685186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.710 [2024-11-28 13:10:36.685216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.710 qpair failed and we were unable to recover it.
00:40:06.710 [2024-11-28 13:10:36.685456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.710 [2024-11-28 13:10:36.685485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.710 qpair failed and we were unable to recover it.
00:40:06.710 [2024-11-28 13:10:36.685833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.710 [2024-11-28 13:10:36.685861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.710 qpair failed and we were unable to recover it.
00:40:06.710 [2024-11-28 13:10:36.686226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.710 [2024-11-28 13:10:36.686255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.710 qpair failed and we were unable to recover it.
00:40:06.710 [2024-11-28 13:10:36.686600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.710 [2024-11-28 13:10:36.686629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.710 qpair failed and we were unable to recover it.
00:40:06.710 [2024-11-28 13:10:36.686878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.710 [2024-11-28 13:10:36.686906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.710 qpair failed and we were unable to recover it.
00:40:06.710 [2024-11-28 13:10:36.687180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.710 [2024-11-28 13:10:36.687210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.687582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.687609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.687987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.688014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.688259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.688290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.688661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.688690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.689043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.689070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.689411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.689440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.689808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.689836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.690198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.690227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.690580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.690607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.690970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.690999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.691383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.691413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.691777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.691805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.692182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.692212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.692609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.692637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.693000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.693027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.693389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.693419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.693782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.693812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.694185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.694214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.694485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.694512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.694766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.694799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.695182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.695212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.695589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.695617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.695967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.695994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.696250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.696285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.696652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.696681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.696899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.696928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.697307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.697337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.711 [2024-11-28 13:10:36.697738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.711 [2024-11-28 13:10:36.697765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.711 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.698133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.698173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.698436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.698471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.698849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.698877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.699240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.699269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.699646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.699673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.700047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.700076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.700444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.700474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.700817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.700844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.701213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.701243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.701607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.701635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.701995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.702022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.702380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.702409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.702771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.702800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.703178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.703207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.703569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.703596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.703969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.703997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.704252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.704281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.704638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.704666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.705030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.705058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.705432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.705462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.705831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.705858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.706124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.706152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.706537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.706566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.706916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.706944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.707311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.707340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.707706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.707734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.708096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.708123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.708484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.708513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.712 [2024-11-28 13:10:36.708874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.712 [2024-11-28 13:10:36.708903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.712 qpair failed and we were unable to recover it.
00:40:06.713 [2024-11-28 13:10:36.709276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.713 [2024-11-28 13:10:36.709307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.713 qpair failed and we were unable to recover it.
00:40:06.713 [2024-11-28 13:10:36.709677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.713 [2024-11-28 13:10:36.709705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.713 qpair failed and we were unable to recover it.
00:40:06.713 [2024-11-28 13:10:36.709936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.713 [2024-11-28 13:10:36.709964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.713 qpair failed and we were unable to recover it.
00:40:06.713 [2024-11-28 13:10:36.710336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.713 [2024-11-28 13:10:36.710365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.713 qpair failed and we were unable to recover it.
00:40:06.713 [2024-11-28 13:10:36.710719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.713 [2024-11-28 13:10:36.710746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.713 qpair failed and we were unable to recover it.
00:40:06.713 [2024-11-28 13:10:36.711112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.713 [2024-11-28 13:10:36.711140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.713 qpair failed and we were unable to recover it.
00:40:06.713 [2024-11-28 13:10:36.711500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.713 [2024-11-28 13:10:36.711529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.713 qpair failed and we were unable to recover it.
00:40:06.713 [2024-11-28 13:10:36.711913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.713 [2024-11-28 13:10:36.711941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.713 qpair failed and we were unable to recover it.
00:40:06.713 [2024-11-28 13:10:36.712315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.713 [2024-11-28 13:10:36.712344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.713 qpair failed and we were unable to recover it.
00:40:06.713 [2024-11-28 13:10:36.712634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.713 [2024-11-28 13:10:36.712663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.713 qpair failed and we were unable to recover it.
00:40:06.713 [2024-11-28 13:10:36.713030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.713 [2024-11-28 13:10:36.713057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.713 qpair failed and we were unable to recover it.
00:40:06.713 [2024-11-28 13:10:36.713446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.713 [2024-11-28 13:10:36.713476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.713 qpair failed and we were unable to recover it.
00:40:06.713 [2024-11-28 13:10:36.713838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.713 [2024-11-28 13:10:36.713871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.713 qpair failed and we were unable to recover it.
00:40:06.713 [2024-11-28 13:10:36.714214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.713 [2024-11-28 13:10:36.714243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.713 qpair failed and we were unable to recover it.
00:40:06.713 [2024-11-28 13:10:36.714357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.713 [2024-11-28 13:10:36.714386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:06.713 qpair failed and we were unable to recover it.
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Write completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Write completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Write completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Write completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Write completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Write completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Write completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Write completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Write completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Write completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 [2024-11-28 13:10:36.715187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Write completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Read completed with error (sct=0, sc=8)
00:40:06.713 starting I/O failed
00:40:06.713 Write completed with error (sct=0, sc=8)
00:40:06.714 Read completed with error (sct=0, sc=8)
00:40:06.714 starting I/O failed
00:40:06.714 Write completed with error (sct=0, sc=8)
00:40:06.714 starting I/O failed
00:40:06.714 Write completed with error (sct=0, sc=8)
00:40:06.714 starting I/O failed
00:40:06.714 Read completed with error (sct=0, sc=8)
00:40:06.714 starting I/O failed
00:40:06.714 Write completed with error (sct=0, sc=8)
00:40:06.714 [2024-11-28 13:10:36.715602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:40:06.714 starting I/O failed
00:40:06.714 Read completed with error (sct=0, sc=8)
00:40:06.714 starting I/O failed
00:40:06.714 Write completed with error (sct=0, sc=8)
00:40:06.714 starting I/O failed
00:40:06.714 Read completed with error (sct=0, sc=8)
00:40:06.714 starting I/O failed
00:40:06.714 Write completed with error (sct=0, sc=8)
00:40:06.714 starting I/O failed
00:40:06.714 Read completed with error (sct=0, sc=8)
00:40:06.714 starting I/O failed
00:40:06.714 Read completed with error (sct=0, sc=8)
00:40:06.714 starting I/O failed
00:40:06.714 Write completed with error (sct=0, sc=8)
00:40:06.714 starting I/O failed
00:40:06.714 Read completed with error (sct=0, sc=8)
00:40:06.714 starting I/O failed
00:40:06.714 Read completed with error (sct=0, sc=8)
00:40:06.714 starting I/O failed
00:40:06.714 Read completed with error (sct=0, sc=8)
00:40:06.714 starting I/O failed
00:40:06.714 Write completed with error (sct=0, sc=8)
00:40:06.714 starting I/O failed
00:40:06.714 Read completed with error (sct=0, sc=8)
00:40:06.714 starting I/O failed
00:40:06.714 Read completed with error (sct=0, sc=8)
00:40:06.714 starting I/O failed
00:40:06.714 Write completed with error (sct=0, sc=8)
00:40:06.714 starting I/O failed
00:40:06.714 Read completed with error (sct=0, sc=8)
00:40:06.714 starting I/O failed
00:40:06.714 Write completed with error (sct=0, sc=8)
00:40:06.714 starting I/O failed
00:40:06.714 Read completed with error (sct=0, sc=8)
00:40:06.714 starting I/O failed
00:40:06.714 Read completed with error (sct=0, sc=8)
00:40:06.714 starting I/O failed
00:40:06.714 [2024-11-28 13:10:36.716009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:40:06.714 [2024-11-28 13:10:36.716634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.714 [2024-11-28 13:10:36.716741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:06.714 qpair failed and we were unable to recover it.
00:40:06.714 [2024-11-28 13:10:36.717151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.714 [2024-11-28 13:10:36.717208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:06.714 qpair failed and we were unable to recover it.
00:40:06.714 [2024-11-28 13:10:36.717678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.714 [2024-11-28 13:10:36.717783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:06.714 qpair failed and we were unable to recover it.
00:40:06.714 [2024-11-28 13:10:36.718099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.714 [2024-11-28 13:10:36.718134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:06.714 qpair failed and we were unable to recover it.
00:40:06.714 [2024-11-28 13:10:36.718514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.714 [2024-11-28 13:10:36.718545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.714 qpair failed and we were unable to recover it. 00:40:06.714 [2024-11-28 13:10:36.718884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.714 [2024-11-28 13:10:36.718912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.714 qpair failed and we were unable to recover it. 00:40:06.714 [2024-11-28 13:10:36.719447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.714 [2024-11-28 13:10:36.719551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.714 qpair failed and we were unable to recover it. 00:40:06.714 [2024-11-28 13:10:36.720004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.714 [2024-11-28 13:10:36.720041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.714 qpair failed and we were unable to recover it. 00:40:06.714 [2024-11-28 13:10:36.720389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.714 [2024-11-28 13:10:36.720420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.714 qpair failed and we were unable to recover it. 
00:40:06.714 [2024-11-28 13:10:36.720804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.714 [2024-11-28 13:10:36.720832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.714 qpair failed and we were unable to recover it. 00:40:06.714 [2024-11-28 13:10:36.721183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.714 [2024-11-28 13:10:36.721226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.714 qpair failed and we were unable to recover it. 00:40:06.714 [2024-11-28 13:10:36.721600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.714 [2024-11-28 13:10:36.721630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.714 qpair failed and we were unable to recover it. 00:40:06.714 [2024-11-28 13:10:36.721973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.714 [2024-11-28 13:10:36.722001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.714 qpair failed and we were unable to recover it. 00:40:06.714 [2024-11-28 13:10:36.722360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.714 [2024-11-28 13:10:36.722390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.714 qpair failed and we were unable to recover it. 
00:40:06.714 [2024-11-28 13:10:36.722738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.714 [2024-11-28 13:10:36.722767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.714 qpair failed and we were unable to recover it. 00:40:06.714 [2024-11-28 13:10:36.723122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.714 [2024-11-28 13:10:36.723150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.714 qpair failed and we were unable to recover it. 00:40:06.714 [2024-11-28 13:10:36.723551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.714 [2024-11-28 13:10:36.723581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.714 qpair failed and we were unable to recover it. 00:40:06.714 [2024-11-28 13:10:36.723970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.714 [2024-11-28 13:10:36.723999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.714 qpair failed and we were unable to recover it. 00:40:06.714 [2024-11-28 13:10:36.724328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.714 [2024-11-28 13:10:36.724359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.714 qpair failed and we were unable to recover it. 
00:40:06.714 [2024-11-28 13:10:36.724703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.714 [2024-11-28 13:10:36.724731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.714 qpair failed and we were unable to recover it. 00:40:06.714 [2024-11-28 13:10:36.725092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.714 [2024-11-28 13:10:36.725120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.714 qpair failed and we were unable to recover it. 00:40:06.714 [2024-11-28 13:10:36.725462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.714 [2024-11-28 13:10:36.725492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.714 qpair failed and we were unable to recover it. 00:40:06.714 [2024-11-28 13:10:36.725848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.725876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 00:40:06.715 [2024-11-28 13:10:36.726237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.726267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 
00:40:06.715 [2024-11-28 13:10:36.726658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.726686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 00:40:06.715 [2024-11-28 13:10:36.727066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.727094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 00:40:06.715 [2024-11-28 13:10:36.727462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.727492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 00:40:06.715 [2024-11-28 13:10:36.727848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.727876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 00:40:06.715 [2024-11-28 13:10:36.728240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.728271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 
00:40:06.715 [2024-11-28 13:10:36.728622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.728650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 00:40:06.715 [2024-11-28 13:10:36.728958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.728986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 00:40:06.715 [2024-11-28 13:10:36.729211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.729240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 00:40:06.715 [2024-11-28 13:10:36.729583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.729612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 00:40:06.715 [2024-11-28 13:10:36.729993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.730021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 
00:40:06.715 [2024-11-28 13:10:36.730423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.730452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 00:40:06.715 [2024-11-28 13:10:36.730816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.730844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 00:40:06.715 [2024-11-28 13:10:36.731207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.731235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 00:40:06.715 [2024-11-28 13:10:36.731620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.731655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 00:40:06.715 [2024-11-28 13:10:36.732010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.732041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 
00:40:06.715 [2024-11-28 13:10:36.732310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.732340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 00:40:06.715 [2024-11-28 13:10:36.732705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.732734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 00:40:06.715 [2024-11-28 13:10:36.733098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.733127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 00:40:06.715 [2024-11-28 13:10:36.733470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.733499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 00:40:06.715 [2024-11-28 13:10:36.733853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.733882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 
00:40:06.715 [2024-11-28 13:10:36.734261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.734291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 00:40:06.715 [2024-11-28 13:10:36.734637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.734664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 00:40:06.715 [2024-11-28 13:10:36.735040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.735068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 00:40:06.715 [2024-11-28 13:10:36.735425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.735454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 00:40:06.715 [2024-11-28 13:10:36.735819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.735848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.715 qpair failed and we were unable to recover it. 
00:40:06.715 [2024-11-28 13:10:36.736213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.715 [2024-11-28 13:10:36.736244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.716 qpair failed and we were unable to recover it. 00:40:06.716 [2024-11-28 13:10:36.736610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.716 [2024-11-28 13:10:36.736639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.716 qpair failed and we were unable to recover it. 00:40:06.716 [2024-11-28 13:10:36.737016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.716 [2024-11-28 13:10:36.737047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.716 qpair failed and we were unable to recover it. 00:40:06.716 [2024-11-28 13:10:36.737382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.716 [2024-11-28 13:10:36.737412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.716 qpair failed and we were unable to recover it. 00:40:06.716 [2024-11-28 13:10:36.737666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.716 [2024-11-28 13:10:36.737694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.716 qpair failed and we were unable to recover it. 
00:40:06.716 [2024-11-28 13:10:36.738061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.716 [2024-11-28 13:10:36.738091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.716 qpair failed and we were unable to recover it. 00:40:06.716 [2024-11-28 13:10:36.738455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.716 [2024-11-28 13:10:36.738487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.716 qpair failed and we were unable to recover it. 00:40:06.716 [2024-11-28 13:10:36.738837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.716 [2024-11-28 13:10:36.738866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.716 qpair failed and we were unable to recover it. 00:40:06.716 [2024-11-28 13:10:36.739283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.716 [2024-11-28 13:10:36.739312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.716 qpair failed and we were unable to recover it. 00:40:06.716 [2024-11-28 13:10:36.739687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.716 [2024-11-28 13:10:36.739716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.716 qpair failed and we were unable to recover it. 
00:40:06.716 [2024-11-28 13:10:36.740080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.716 [2024-11-28 13:10:36.740109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.716 qpair failed and we were unable to recover it. 00:40:06.716 [2024-11-28 13:10:36.740467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.716 [2024-11-28 13:10:36.740497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.716 qpair failed and we were unable to recover it. 00:40:06.716 [2024-11-28 13:10:36.740758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.716 [2024-11-28 13:10:36.740787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.716 qpair failed and we were unable to recover it. 00:40:06.716 [2024-11-28 13:10:36.741167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.716 [2024-11-28 13:10:36.741198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.716 qpair failed and we were unable to recover it. 00:40:06.716 [2024-11-28 13:10:36.741568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.716 [2024-11-28 13:10:36.741596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.716 qpair failed and we were unable to recover it. 
00:40:06.716 [2024-11-28 13:10:36.741964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.716 [2024-11-28 13:10:36.741991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.716 qpair failed and we were unable to recover it. 00:40:06.716 [2024-11-28 13:10:36.742381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.716 [2024-11-28 13:10:36.742413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.716 qpair failed and we were unable to recover it. 00:40:06.716 [2024-11-28 13:10:36.742775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.716 [2024-11-28 13:10:36.742805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.716 qpair failed and we were unable to recover it. 00:40:06.716 [2024-11-28 13:10:36.743153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.716 [2024-11-28 13:10:36.743192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.716 qpair failed and we were unable to recover it. 00:40:06.716 [2024-11-28 13:10:36.743535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.716 [2024-11-28 13:10:36.743564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.716 qpair failed and we were unable to recover it. 
00:40:06.716 [2024-11-28 13:10:36.743922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.716 [2024-11-28 13:10:36.743950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.716 qpair failed and we were unable to recover it. 00:40:06.716 [2024-11-28 13:10:36.744150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:06.716 [2024-11-28 13:10:36.744200] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:06.716 [2024-11-28 13:10:36.744208] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:06.716 [2024-11-28 13:10:36.744215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:06.716 [2024-11-28 13:10:36.744222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:06.716 [2024-11-28 13:10:36.744315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.716 [2024-11-28 13:10:36.744344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.716 qpair failed and we were unable to recover it. 00:40:06.716 [2024-11-28 13:10:36.744711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.744738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 
00:40:06.717 [2024-11-28 13:10:36.745083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.745111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.745358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.745387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.745809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.745836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.746204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.746233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.746247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:40:06.717 [2024-11-28 13:10:36.746653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.746682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 
00:40:06.717 [2024-11-28 13:10:36.746601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:40:06.717 [2024-11-28 13:10:36.746750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:40:06.717 [2024-11-28 13:10:36.746754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:06.717 [2024-11-28 13:10:36.746959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.746986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.747335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.747365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.747722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.747750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.748121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.748149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 
00:40:06.717 [2024-11-28 13:10:36.748510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.748539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.748795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.748829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.749077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.749107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.749383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.749417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.749797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.749826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 
00:40:06.717 [2024-11-28 13:10:36.750072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.750099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.750461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.750491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.750851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.750880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.751169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.751200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.751472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.751501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 
00:40:06.717 [2024-11-28 13:10:36.751751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.751780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.752124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.752152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.752527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.752556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.752924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.752952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.753333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.753362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 
00:40:06.717 [2024-11-28 13:10:36.753645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.753672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.754040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.754069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.754328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.754358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.754704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.754732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.755101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.755130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 
00:40:06.717 [2024-11-28 13:10:36.755532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.755562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.755922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.717 [2024-11-28 13:10:36.755957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.717 qpair failed and we were unable to recover it. 00:40:06.717 [2024-11-28 13:10:36.756208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.756237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.756593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.756621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.757011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.757039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 
00:40:06.718 [2024-11-28 13:10:36.757422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.757450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.757711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.757739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.758101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.758129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.758366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.758397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.758646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.758674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 
00:40:06.718 [2024-11-28 13:10:36.759024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.759053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.759418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.759449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.759685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.759715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.760008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.760036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.760444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.760473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 
00:40:06.718 [2024-11-28 13:10:36.760851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.760880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.761088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.761118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.761509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.761539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.761911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.761939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.762308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.762339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 
00:40:06.718 [2024-11-28 13:10:36.762676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.762704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.763080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.763108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.763469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.763500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.763871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.763899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.764275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.764305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 
00:40:06.718 [2024-11-28 13:10:36.764530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.764559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.764972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.764999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.765331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.765360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.765718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.765760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.766101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.766130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 
00:40:06.718 [2024-11-28 13:10:36.766443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.766474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.766871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.766899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.767157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.718 [2024-11-28 13:10:36.767194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.718 qpair failed and we were unable to recover it. 00:40:06.718 [2024-11-28 13:10:36.767462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.719 [2024-11-28 13:10:36.767490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.719 qpair failed and we were unable to recover it. 00:40:06.719 [2024-11-28 13:10:36.767857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.719 [2024-11-28 13:10:36.767885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.719 qpair failed and we were unable to recover it. 
00:40:06.719 [2024-11-28 13:10:36.768255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.719 [2024-11-28 13:10:36.768286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.719 qpair failed and we were unable to recover it. 00:40:06.719 [2024-11-28 13:10:36.768648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.719 [2024-11-28 13:10:36.768676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.719 qpair failed and we were unable to recover it. 00:40:06.719 [2024-11-28 13:10:36.768990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.719 [2024-11-28 13:10:36.769017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.719 qpair failed and we were unable to recover it. 00:40:06.719 [2024-11-28 13:10:36.769260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.719 [2024-11-28 13:10:36.769290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.719 qpair failed and we were unable to recover it. 00:40:06.719 [2024-11-28 13:10:36.769542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.719 [2024-11-28 13:10:36.769575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.719 qpair failed and we were unable to recover it. 
00:40:06.719 [2024-11-28 13:10:36.769936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.719 [2024-11-28 13:10:36.769965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.719 qpair failed and we were unable to recover it. 00:40:06.719 [2024-11-28 13:10:36.770233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.719 [2024-11-28 13:10:36.770265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.719 qpair failed and we were unable to recover it. 00:40:06.719 [2024-11-28 13:10:36.770629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.719 [2024-11-28 13:10:36.770658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.719 qpair failed and we were unable to recover it. 00:40:06.719 [2024-11-28 13:10:36.770994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.719 [2024-11-28 13:10:36.771021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.719 qpair failed and we were unable to recover it. 00:40:06.719 [2024-11-28 13:10:36.771294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.719 [2024-11-28 13:10:36.771325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.719 qpair failed and we were unable to recover it. 
00:40:06.719 [2024-11-28 13:10:36.771710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.719 [2024-11-28 13:10:36.771738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.719 qpair failed and we were unable to recover it. 00:40:06.719 [2024-11-28 13:10:36.772104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.719 [2024-11-28 13:10:36.772132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.719 qpair failed and we were unable to recover it. 00:40:06.719 [2024-11-28 13:10:36.772506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.719 [2024-11-28 13:10:36.772536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.719 qpair failed and we were unable to recover it. 00:40:06.719 [2024-11-28 13:10:36.772898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.719 [2024-11-28 13:10:36.772926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.719 qpair failed and we were unable to recover it. 00:40:06.719 [2024-11-28 13:10:36.773170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.719 [2024-11-28 13:10:36.773200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.719 qpair failed and we were unable to recover it. 
00:40:06.719 [2024-11-28 13:10:36.773538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.719 [2024-11-28 13:10:36.773566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.719 qpair failed and we were unable to recover it. 00:40:06.719 [2024-11-28 13:10:36.773788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.719 [2024-11-28 13:10:36.773816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.719 qpair failed and we were unable to recover it. 00:40:06.719 [2024-11-28 13:10:36.774205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.719 [2024-11-28 13:10:36.774236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.719 qpair failed and we were unable to recover it. 00:40:06.719 [2024-11-28 13:10:36.774367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.719 [2024-11-28 13:10:36.774399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.719 qpair failed and we were unable to recover it. 
00:40:06.719 Read completed with error (sct=0, sc=8) 00:40:06.719 starting I/O failed 00:40:06.719 Read completed with error (sct=0, sc=8) 00:40:06.719 starting I/O failed 00:40:06.719 Read completed with error (sct=0, sc=8) 00:40:06.719 starting I/O failed 00:40:06.719 Read completed with error (sct=0, sc=8) 00:40:06.719 starting I/O failed 00:40:06.719 Read completed with error (sct=0, sc=8) 00:40:06.719 starting I/O failed 00:40:06.719 Read completed with error (sct=0, sc=8) 00:40:06.719 starting I/O failed 00:40:06.719 Read completed with error (sct=0, sc=8) 00:40:06.719 starting I/O failed 00:40:06.719 Read completed with error (sct=0, sc=8) 00:40:06.719 starting I/O failed 00:40:06.719 Read completed with error (sct=0, sc=8) 00:40:06.719 starting I/O failed 00:40:06.719 Read completed with error (sct=0, sc=8) 00:40:06.719 starting I/O failed 00:40:06.719 Write completed with error (sct=0, sc=8) 00:40:06.719 starting I/O failed 00:40:06.719 Write completed with error (sct=0, sc=8) 00:40:06.719 starting I/O failed 00:40:06.719 Read completed with error (sct=0, sc=8) 00:40:06.719 starting I/O failed 00:40:06.719 Write completed with error (sct=0, sc=8) 00:40:06.719 starting I/O failed 00:40:06.719 Read completed with error (sct=0, sc=8) 00:40:06.719 starting I/O failed 00:40:06.719 Write completed with error (sct=0, sc=8) 00:40:06.719 starting I/O failed 00:40:06.719 Read completed with error (sct=0, sc=8) 00:40:06.719 starting I/O failed 00:40:06.719 Write completed with error (sct=0, sc=8) 00:40:06.719 starting I/O failed 00:40:06.719 Read completed with error (sct=0, sc=8) 00:40:06.719 starting I/O failed 00:40:06.719 Read completed with error (sct=0, sc=8) 00:40:06.719 starting I/O failed 00:40:06.719 Write completed with error (sct=0, sc=8) 00:40:06.719 starting I/O failed 00:40:06.719 Read completed with error (sct=0, sc=8) 00:40:06.719 starting I/O failed 00:40:06.719 Read completed with error (sct=0, sc=8) 00:40:06.719 starting I/O failed 00:40:06.720 
Read completed with error (sct=0, sc=8) 00:40:06.720 starting I/O failed 00:40:06.720 Write completed with error (sct=0, sc=8) 00:40:06.720 starting I/O failed 00:40:06.720 Read completed with error (sct=0, sc=8) 00:40:06.720 starting I/O failed 00:40:06.720 Read completed with error (sct=0, sc=8) 00:40:06.720 starting I/O failed 00:40:06.720 Write completed with error (sct=0, sc=8) 00:40:06.720 starting I/O failed 00:40:06.720 Read completed with error (sct=0, sc=8) 00:40:06.720 starting I/O failed 00:40:06.720 Write completed with error (sct=0, sc=8) 00:40:06.720 starting I/O failed 00:40:06.720 Read completed with error (sct=0, sc=8) 00:40:06.720 starting I/O failed 00:40:06.720 Read completed with error (sct=0, sc=8) 00:40:06.720 starting I/O failed 00:40:06.720 [2024-11-28 13:10:36.775227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:40:06.720 [2024-11-28 13:10:36.775762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.720 [2024-11-28 13:10:36.775884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:06.720 qpair failed and we were unable to recover it. 00:40:06.720 [2024-11-28 13:10:36.776093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.720 [2024-11-28 13:10:36.776129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:06.720 qpair failed and we were unable to recover it. 00:40:06.720 [2024-11-28 13:10:36.776565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.720 [2024-11-28 13:10:36.776596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.720 qpair failed and we were unable to recover it. 
00:40:06.720 [2024-11-28 13:10:36.776956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.720 [2024-11-28 13:10:36.776984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.720 qpair failed and we were unable to recover it. 00:40:06.720 [2024-11-28 13:10:36.777224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.720 [2024-11-28 13:10:36.777277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.720 qpair failed and we were unable to recover it. 00:40:06.720 [2024-11-28 13:10:36.777513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.720 [2024-11-28 13:10:36.777541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.720 qpair failed and we were unable to recover it. 00:40:06.720 [2024-11-28 13:10:36.777920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.720 [2024-11-28 13:10:36.777948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.720 qpair failed and we were unable to recover it. 00:40:06.720 [2024-11-28 13:10:36.778219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.720 [2024-11-28 13:10:36.778248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.720 qpair failed and we were unable to recover it. 
00:40:06.720 [2024-11-28 13:10:36.778491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.720 [2024-11-28 13:10:36.778529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.720 qpair failed and we were unable to recover it. 00:40:06.720 [2024-11-28 13:10:36.778913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.720 [2024-11-28 13:10:36.778942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.720 qpair failed and we were unable to recover it. 00:40:06.720 [2024-11-28 13:10:36.779334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.720 [2024-11-28 13:10:36.779363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.720 qpair failed and we were unable to recover it. 00:40:06.720 [2024-11-28 13:10:36.779728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.720 [2024-11-28 13:10:36.779756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.720 qpair failed and we were unable to recover it. 00:40:06.720 [2024-11-28 13:10:36.780055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.720 [2024-11-28 13:10:36.780083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:06.720 qpair failed and we were unable to recover it. 
00:40:06.720 [2024-11-28 13:10:36.780510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.720 [2024-11-28 13:10:36.780538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:06.720 qpair failed and we were unable to recover it.
[... approximately 114 further identical connect()/qpair failure triplets elided (errno = 111, tqpair=0x6de090, addr=10.0.0.2, port=4420), timestamps 13:10:36.780 through 13:10:36.820 ...]
00:40:07.004 [2024-11-28 13:10:36.821156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.004 [2024-11-28 13:10:36.821193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.004 qpair failed and we were unable to recover it. 00:40:07.004 [2024-11-28 13:10:36.821554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.004 [2024-11-28 13:10:36.821582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.004 qpair failed and we were unable to recover it. 00:40:07.004 [2024-11-28 13:10:36.821959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.004 [2024-11-28 13:10:36.821986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.004 qpair failed and we were unable to recover it. 00:40:07.004 [2024-11-28 13:10:36.822221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.004 [2024-11-28 13:10:36.822253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.004 qpair failed and we were unable to recover it. 00:40:07.004 [2024-11-28 13:10:36.822615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.004 [2024-11-28 13:10:36.822644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.004 qpair failed and we were unable to recover it. 
00:40:07.004 [2024-11-28 13:10:36.822857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.004 [2024-11-28 13:10:36.822885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.004 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.823252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.823282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.823663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.823691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.824059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.824087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.824370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.824398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 
00:40:07.005 [2024-11-28 13:10:36.824770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.824797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.825035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.825062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.825405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.825436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.825663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.825690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.826056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.826084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 
00:40:07.005 [2024-11-28 13:10:36.826313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.826342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.826707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.826741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.827093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.827122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.827347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.827377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.827723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.827751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 
00:40:07.005 [2024-11-28 13:10:36.828126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.828153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.828516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.828544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.828906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.828933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.829305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.829335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.829704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.829732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 
00:40:07.005 [2024-11-28 13:10:36.830016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.830044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.830268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.830297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.830679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.830706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.831070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.831098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.831351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.831384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 
00:40:07.005 [2024-11-28 13:10:36.831691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.831720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.832062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.832090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.832459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.832488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.832854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.832882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.833238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.833268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 
00:40:07.005 [2024-11-28 13:10:36.833534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.833562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.833886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.833914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.834294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.005 [2024-11-28 13:10:36.834323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.005 qpair failed and we were unable to recover it. 00:40:07.005 [2024-11-28 13:10:36.834688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.834715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.006 [2024-11-28 13:10:36.835075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.835102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 
00:40:07.006 [2024-11-28 13:10:36.835316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.835345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.006 [2024-11-28 13:10:36.835547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.835573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.006 [2024-11-28 13:10:36.835967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.835995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.006 [2024-11-28 13:10:36.836209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.836243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.006 [2024-11-28 13:10:36.836603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.836630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 
00:40:07.006 [2024-11-28 13:10:36.837001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.837029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.006 [2024-11-28 13:10:36.837384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.837414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.006 [2024-11-28 13:10:36.837511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.837537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.006 [2024-11-28 13:10:36.837996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.838093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.006 [2024-11-28 13:10:36.838570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.838675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 
00:40:07.006 [2024-11-28 13:10:36.838984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.839019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.006 [2024-11-28 13:10:36.839437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.839541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.006 [2024-11-28 13:10:36.839951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.839987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.006 [2024-11-28 13:10:36.840439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.840541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.006 [2024-11-28 13:10:36.840994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.841029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 
00:40:07.006 [2024-11-28 13:10:36.841381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.841412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.006 [2024-11-28 13:10:36.841802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.841830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.006 [2024-11-28 13:10:36.842205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.842236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.006 [2024-11-28 13:10:36.842585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.842613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.006 [2024-11-28 13:10:36.842995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.843023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 
00:40:07.006 [2024-11-28 13:10:36.843689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.843719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.006 [2024-11-28 13:10:36.844084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.844111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.006 [2024-11-28 13:10:36.844372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.844402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.006 [2024-11-28 13:10:36.844803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.844831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.006 [2024-11-28 13:10:36.845198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.845227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 
00:40:07.006 [2024-11-28 13:10:36.845484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.845512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.006 [2024-11-28 13:10:36.845884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.845912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.006 [2024-11-28 13:10:36.846260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.006 [2024-11-28 13:10:36.846289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.006 qpair failed and we were unable to recover it. 00:40:07.007 [2024-11-28 13:10:36.846672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.007 [2024-11-28 13:10:36.846700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.007 qpair failed and we were unable to recover it. 00:40:07.007 [2024-11-28 13:10:36.847055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.007 [2024-11-28 13:10:36.847083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.007 qpair failed and we were unable to recover it. 
00:40:07.007 [2024-11-28 13:10:36.847321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.007 [2024-11-28 13:10:36.847356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.007 qpair failed and we were unable to recover it. 00:40:07.007 [2024-11-28 13:10:36.847704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.007 [2024-11-28 13:10:36.847732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.007 qpair failed and we were unable to recover it. 00:40:07.007 [2024-11-28 13:10:36.848095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.007 [2024-11-28 13:10:36.848123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.007 qpair failed and we were unable to recover it. 00:40:07.007 [2024-11-28 13:10:36.848538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.007 [2024-11-28 13:10:36.848568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.007 qpair failed and we were unable to recover it. 00:40:07.007 [2024-11-28 13:10:36.848782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.007 [2024-11-28 13:10:36.848810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.007 qpair failed and we were unable to recover it. 
00:40:07.007 [2024-11-28 13:10:36.849038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.007 [2024-11-28 13:10:36.849065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.007 qpair failed and we were unable to recover it. 00:40:07.007 [2024-11-28 13:10:36.849461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.007 [2024-11-28 13:10:36.849489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.007 qpair failed and we were unable to recover it. 00:40:07.007 [2024-11-28 13:10:36.849854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.007 [2024-11-28 13:10:36.849882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.007 qpair failed and we were unable to recover it. 00:40:07.007 [2024-11-28 13:10:36.850109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.007 [2024-11-28 13:10:36.850137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.007 qpair failed and we were unable to recover it. 00:40:07.007 [2024-11-28 13:10:36.850547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.007 [2024-11-28 13:10:36.850579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.007 qpair failed and we were unable to recover it. 
[identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." records, timestamps 13:10:36.850942 through 13:10:36.889488, omitted]
00:40:07.011 [2024-11-28 13:10:36.889699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.011 [2024-11-28 13:10:36.889735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.011 qpair failed and we were unable to recover it.
00:40:07.011 [2024-11-28 13:10:36.890108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.011 [2024-11-28 13:10:36.890136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.011 qpair failed and we were unable to recover it.
00:40:07.011 [2024-11-28 13:10:36.890403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.011 [2024-11-28 13:10:36.890435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.011 qpair failed and we were unable to recover it.
00:40:07.011 [2024-11-28 13:10:36.890807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.011 [2024-11-28 13:10:36.890835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.011 qpair failed and we were unable to recover it.
00:40:07.011 [2024-11-28 13:10:36.891204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.011 [2024-11-28 13:10:36.891233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.011 qpair failed and we were unable to recover it.
00:40:07.011 [2024-11-28 13:10:36.891462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.011 [2024-11-28 13:10:36.891490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.011 qpair failed and we were unable to recover it.
00:40:07.011 [2024-11-28 13:10:36.891765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.011 [2024-11-28 13:10:36.891793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.011 qpair failed and we were unable to recover it.
00:40:07.011 [2024-11-28 13:10:36.892177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.011 [2024-11-28 13:10:36.892205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.011 qpair failed and we were unable to recover it.
00:40:07.011 [2024-11-28 13:10:36.892442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.011 [2024-11-28 13:10:36.892470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.011 qpair failed and we were unable to recover it.
00:40:07.011 [2024-11-28 13:10:36.892841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.011 [2024-11-28 13:10:36.892869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.011 qpair failed and we were unable to recover it.
00:40:07.011 [2024-11-28 13:10:36.893250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.011 [2024-11-28 13:10:36.893278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.011 qpair failed and we were unable to recover it.
00:40:07.011 [2024-11-28 13:10:36.893647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.011 [2024-11-28 13:10:36.893675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.011 qpair failed and we were unable to recover it.
00:40:07.011 [2024-11-28 13:10:36.893947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.011 [2024-11-28 13:10:36.893974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.011 qpair failed and we were unable to recover it.
00:40:07.011 [2024-11-28 13:10:36.894340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.011 [2024-11-28 13:10:36.894368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.011 qpair failed and we were unable to recover it.
00:40:07.011 [2024-11-28 13:10:36.894760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.011 [2024-11-28 13:10:36.894788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.011 qpair failed and we were unable to recover it.
00:40:07.011 [2024-11-28 13:10:36.895154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.011 [2024-11-28 13:10:36.895193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.011 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.895592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.895621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.895987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.896014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.896240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.896269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.896631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.896658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.897045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.897073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.897431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.897460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.897843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.897870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.898264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.898293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.898536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.898564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.898916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.898943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.899319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.899348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.899607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.899636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.899997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.900024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.900382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.900411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.900675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.900702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.901075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.901103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.901479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.901509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.901868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.901895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.902264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.902293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.902670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.902698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.903073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.903100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.903468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.903497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.903856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.903884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.904252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.904281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.904642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.904677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.905022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.905051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.905266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.905297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.905671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.012 [2024-11-28 13:10:36.905698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.012 qpair failed and we were unable to recover it.
00:40:07.012 [2024-11-28 13:10:36.906064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.906092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.906307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.906336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.906715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.906742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.906964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.906992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.907337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.907366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.907721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.907748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.908119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.908146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.908516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.908546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.908810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.908838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.909066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.909094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.909519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.909549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.909889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.909917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.910260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.910289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.910659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.910687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.910787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.910814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.911308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.911415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.911880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.911915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.912183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.912214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.912623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.912727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.913141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.913199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.913367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.913397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.913785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.913814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.914123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.914151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.914535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.914585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.914978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.915007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.915220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.915252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.915612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.915640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.915869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.915897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.916272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.916301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.013 [2024-11-28 13:10:36.916665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.013 [2024-11-28 13:10:36.916693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.013 qpair failed and we were unable to recover it.
00:40:07.014 [2024-11-28 13:10:36.916924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.014 [2024-11-28 13:10:36.916953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.014 qpair failed and we were unable to recover it.
00:40:07.014 [2024-11-28 13:10:36.917306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.014 [2024-11-28 13:10:36.917335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.014 qpair failed and we were unable to recover it.
00:40:07.014 [2024-11-28 13:10:36.917551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.014 [2024-11-28 13:10:36.917578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.014 qpair failed and we were unable to recover it.
00:40:07.014 [2024-11-28 13:10:36.917961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.014 [2024-11-28 13:10:36.917989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.014 qpair failed and we were unable to recover it.
00:40:07.014 [2024-11-28 13:10:36.918336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.014 [2024-11-28 13:10:36.918370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.014 qpair failed and we were unable to recover it.
00:40:07.014 [2024-11-28 13:10:36.918719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.014 [2024-11-28 13:10:36.918748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.014 qpair failed and we were unable to recover it.
00:40:07.014 [2024-11-28 13:10:36.918967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.014 [2024-11-28 13:10:36.918996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.014 qpair failed and we were unable to recover it.
00:40:07.014 [2024-11-28 13:10:36.919386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.014 [2024-11-28 13:10:36.919417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.014 qpair failed and we were unable to recover it.
00:40:07.014 [2024-11-28 13:10:36.919782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.014 [2024-11-28 13:10:36.919810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.014 qpair failed and we were unable to recover it.
00:40:07.014 [2024-11-28 13:10:36.920195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.014 [2024-11-28 13:10:36.920224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.014 qpair failed and we were unable to recover it.
00:40:07.014 [2024-11-28 13:10:36.920598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.014 [2024-11-28 13:10:36.920626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.014 qpair failed and we were unable to recover it.
00:40:07.014 [2024-11-28 13:10:36.920992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.014 [2024-11-28 13:10:36.921019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.014 qpair failed and we were unable to recover it.
00:40:07.014 [2024-11-28 13:10:36.921408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.014 [2024-11-28 13:10:36.921437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.014 qpair failed and we were unable to recover it.
00:40:07.014 [2024-11-28 13:10:36.921818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.014 [2024-11-28 13:10:36.921845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.014 qpair failed and we were unable to recover it.
00:40:07.014 [2024-11-28 13:10:36.922257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.014 [2024-11-28 13:10:36.922286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.014 qpair failed and we were unable to recover it.
00:40:07.014 [2024-11-28 13:10:36.922644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.014 [2024-11-28 13:10:36.922672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.014 qpair failed and we were unable to recover it.
00:40:07.014 [2024-11-28 13:10:36.923027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.014 [2024-11-28 13:10:36.923055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.014 qpair failed and we were unable to recover it.
00:40:07.014 [2024-11-28 13:10:36.923435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.014 [2024-11-28 13:10:36.923465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.014 qpair failed and we were unable to recover it.
00:40:07.014 [2024-11-28 13:10:36.923608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.014 [2024-11-28 13:10:36.923636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.014 qpair failed and we were unable to recover it.
00:40:07.014 [2024-11-28 13:10:36.923986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.014 [2024-11-28 13:10:36.924014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.014 qpair failed and we were unable to recover it.
00:40:07.014 [2024-11-28 13:10:36.924366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.014 [2024-11-28 13:10:36.924401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.014 qpair failed and we were unable to recover it.
00:40:07.014 [2024-11-28 13:10:36.924648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.014 [2024-11-28 13:10:36.924676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.014 qpair failed and we were unable to recover it. 00:40:07.014 [2024-11-28 13:10:36.925040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.014 [2024-11-28 13:10:36.925068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.014 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.925424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.925453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.925822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.925850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.926219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.926250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 
00:40:07.015 [2024-11-28 13:10:36.926652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.926679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.927038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.927066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.927403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.927432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.927801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.927829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.928171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.928200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 
00:40:07.015 [2024-11-28 13:10:36.928567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.928595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.928842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.928870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.929243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.929273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.929652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.929682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.930045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.930073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 
00:40:07.015 [2024-11-28 13:10:36.930502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.930531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.930786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.930821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.931180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.931210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.931549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.931577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.931948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.931977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 
00:40:07.015 [2024-11-28 13:10:36.932205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.932236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.932594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.932621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.933001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.933029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.933397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.933426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.933822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.933849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 
00:40:07.015 [2024-11-28 13:10:36.934238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.934268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.934649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.934677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.935043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.935071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.935346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.935376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.935657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.935685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 
00:40:07.015 [2024-11-28 13:10:36.936052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.936080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.936413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.936443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.936814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.936842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.937207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.937236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 00:40:07.015 [2024-11-28 13:10:36.937657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.015 [2024-11-28 13:10:36.937684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.015 qpair failed and we were unable to recover it. 
00:40:07.016 [2024-11-28 13:10:36.938054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.938082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.016 [2024-11-28 13:10:36.938461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.938490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.016 [2024-11-28 13:10:36.938761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.938790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.016 [2024-11-28 13:10:36.939137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.939173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.016 [2024-11-28 13:10:36.939504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.939531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 
00:40:07.016 [2024-11-28 13:10:36.939903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.939937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.016 [2024-11-28 13:10:36.940207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.940242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.016 [2024-11-28 13:10:36.940574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.940602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.016 [2024-11-28 13:10:36.940847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.940875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.016 [2024-11-28 13:10:36.941244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.941273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 
00:40:07.016 [2024-11-28 13:10:36.941549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.941576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.016 [2024-11-28 13:10:36.941931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.941960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.016 [2024-11-28 13:10:36.942325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.942355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.016 [2024-11-28 13:10:36.942704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.942732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.016 [2024-11-28 13:10:36.942967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.942995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 
00:40:07.016 [2024-11-28 13:10:36.943342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.943371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.016 [2024-11-28 13:10:36.943731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.943759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.016 [2024-11-28 13:10:36.944132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.944170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.016 [2024-11-28 13:10:36.944462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.944490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.016 [2024-11-28 13:10:36.944765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.944793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 
00:40:07.016 [2024-11-28 13:10:36.945139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.945175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.016 [2024-11-28 13:10:36.945546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.945574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.016 [2024-11-28 13:10:36.945950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.945977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.016 [2024-11-28 13:10:36.946192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.946228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.016 [2024-11-28 13:10:36.946458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.946486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 
00:40:07.016 [2024-11-28 13:10:36.946833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.946861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.016 [2024-11-28 13:10:36.947198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.947227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.016 [2024-11-28 13:10:36.947603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.947630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.016 [2024-11-28 13:10:36.948012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.016 [2024-11-28 13:10:36.948040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.016 qpair failed and we were unable to recover it. 00:40:07.017 [2024-11-28 13:10:36.948419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.017 [2024-11-28 13:10:36.948448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.017 qpair failed and we were unable to recover it. 
00:40:07.017 [2024-11-28 13:10:36.948799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.017 [2024-11-28 13:10:36.948826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.017 qpair failed and we were unable to recover it. 00:40:07.017 [2024-11-28 13:10:36.949206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.017 [2024-11-28 13:10:36.949234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.017 qpair failed and we were unable to recover it. 00:40:07.017 [2024-11-28 13:10:36.949583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.017 [2024-11-28 13:10:36.949617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.017 qpair failed and we were unable to recover it. 00:40:07.017 [2024-11-28 13:10:36.949945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.017 [2024-11-28 13:10:36.949973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.017 qpair failed and we were unable to recover it. 00:40:07.017 [2024-11-28 13:10:36.950145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.017 [2024-11-28 13:10:36.950222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.017 qpair failed and we were unable to recover it. 
00:40:07.017 [2024-11-28 13:10:36.950609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.017 [2024-11-28 13:10:36.950638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.017 qpair failed and we were unable to recover it. 00:40:07.017 [2024-11-28 13:10:36.950995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.017 [2024-11-28 13:10:36.951023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.017 qpair failed and we were unable to recover it. 00:40:07.017 [2024-11-28 13:10:36.951381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.017 [2024-11-28 13:10:36.951412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.017 qpair failed and we were unable to recover it. 00:40:07.017 [2024-11-28 13:10:36.951771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.017 [2024-11-28 13:10:36.951798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.017 qpair failed and we were unable to recover it. 00:40:07.017 [2024-11-28 13:10:36.952168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.017 [2024-11-28 13:10:36.952198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.017 qpair failed and we were unable to recover it. 
00:40:07.017 [2024-11-28 13:10:36.952556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.017 [2024-11-28 13:10:36.952584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.017 qpair failed and we were unable to recover it.
00:40:07.017 [2024-11-28 13:10:36.952953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.017 [2024-11-28 13:10:36.952980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.017 qpair failed and we were unable to recover it.
00:40:07.017 [2024-11-28 13:10:36.953338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.017 [2024-11-28 13:10:36.953367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.017 qpair failed and we were unable to recover it.
00:40:07.017 [2024-11-28 13:10:36.953735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.017 [2024-11-28 13:10:36.953763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.017 qpair failed and we were unable to recover it.
00:40:07.017 [2024-11-28 13:10:36.954137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.017 [2024-11-28 13:10:36.954184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.017 qpair failed and we were unable to recover it.
00:40:07.017 [2024-11-28 13:10:36.954547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.017 [2024-11-28 13:10:36.954575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.017 qpair failed and we were unable to recover it.
00:40:07.017 [2024-11-28 13:10:36.954803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.017 [2024-11-28 13:10:36.954835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.017 qpair failed and we were unable to recover it.
00:40:07.017 [2024-11-28 13:10:36.955216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.017 [2024-11-28 13:10:36.955246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.017 qpair failed and we were unable to recover it.
00:40:07.017 [2024-11-28 13:10:36.955639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.017 [2024-11-28 13:10:36.955667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.017 qpair failed and we were unable to recover it.
00:40:07.017 [2024-11-28 13:10:36.955910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.017 [2024-11-28 13:10:36.955940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.017 qpair failed and we were unable to recover it.
00:40:07.017 [2024-11-28 13:10:36.956286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.017 [2024-11-28 13:10:36.956316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.017 qpair failed and we were unable to recover it.
00:40:07.017 [2024-11-28 13:10:36.956668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.017 [2024-11-28 13:10:36.956696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.017 qpair failed and we were unable to recover it.
00:40:07.017 [2024-11-28 13:10:36.957065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.017 [2024-11-28 13:10:36.957092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.017 qpair failed and we were unable to recover it.
00:40:07.017 [2024-11-28 13:10:36.957475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.017 [2024-11-28 13:10:36.957504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.017 qpair failed and we were unable to recover it.
00:40:07.017 [2024-11-28 13:10:36.957884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.017 [2024-11-28 13:10:36.957911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.017 qpair failed and we were unable to recover it.
00:40:07.017 [2024-11-28 13:10:36.958298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.017 [2024-11-28 13:10:36.958327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.017 qpair failed and we were unable to recover it.
00:40:07.017 [2024-11-28 13:10:36.958594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.017 [2024-11-28 13:10:36.958622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.017 qpair failed and we were unable to recover it.
00:40:07.017 [2024-11-28 13:10:36.958972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.017 [2024-11-28 13:10:36.959000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.017 qpair failed and we were unable to recover it.
00:40:07.017 [2024-11-28 13:10:36.959278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.017 [2024-11-28 13:10:36.959308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.017 qpair failed and we were unable to recover it.
00:40:07.017 [2024-11-28 13:10:36.959658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.017 [2024-11-28 13:10:36.959685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.017 qpair failed and we were unable to recover it.
00:40:07.017 [2024-11-28 13:10:36.960002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.960031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.960332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.960361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.960739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.960767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.961140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.961176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.961390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.961419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.961824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.961852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.962222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.962251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.962623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.962650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.963019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.963046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.963439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.963469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.963693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.963721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.964075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.964103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.964482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.964511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.964893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.964928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.965135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.965173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.965539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.965568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.965940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.965968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.966225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.966255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.966609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.966636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.967007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.967035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.967380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.967411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.967769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.967796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.968173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.968202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.968549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.968577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.969013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.969040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.969416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.969445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.969807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.969836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.970201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.970230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.970622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.970649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.971013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.971041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.971289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.971319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.018 qpair failed and we were unable to recover it.
00:40:07.018 [2024-11-28 13:10:36.971725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.018 [2024-11-28 13:10:36.971752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.971973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.972001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.972368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.972396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.972764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.972791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.973155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.973193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.973552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.973580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.973806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.973833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.974176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.974205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.974425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.974458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.974697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.974725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.975106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.975135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.975369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.975398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.975716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.975743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.976066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.976093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.976454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.976485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.976829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.976856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.977229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.977258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.977656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.977684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.978048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.978076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.978177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.978205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.978449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ebe70 is same with the state(6) to be set
00:40:07.019 [2024-11-28 13:10:36.979100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.979209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.979498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.979531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.979900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.979929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.980480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.980585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.980995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.981030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.981236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.981267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.981520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.981549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.981900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.981928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.982289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.982318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.982688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.982716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.983064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.983092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.019 [2024-11-28 13:10:36.983473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.019 [2024-11-28 13:10:36.983502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.019 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.983717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.020 [2024-11-28 13:10:36.983745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.020 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.984093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.020 [2024-11-28 13:10:36.984122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.020 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.984523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.020 [2024-11-28 13:10:36.984554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.020 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.984768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.020 [2024-11-28 13:10:36.984796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.020 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.985189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.020 [2024-11-28 13:10:36.985220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.020 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.985618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.020 [2024-11-28 13:10:36.985646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.020 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.985937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.020 [2024-11-28 13:10:36.985965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.020 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.986315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.020 [2024-11-28 13:10:36.986344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.020 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.986731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.020 [2024-11-28 13:10:36.986759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.020 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.986858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.020 [2024-11-28 13:10:36.986886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.020 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.987287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.020 [2024-11-28 13:10:36.987317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.020 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.987539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.020 [2024-11-28 13:10:36.987568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.020 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.987851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.020 [2024-11-28 13:10:36.987886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.020 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.988265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.020 [2024-11-28 13:10:36.988295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.020 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.988561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.020 [2024-11-28 13:10:36.988588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.020 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.988974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.020 [2024-11-28 13:10:36.989002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.020 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.989248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.020 [2024-11-28 13:10:36.989282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.020 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.989632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.020 [2024-11-28 13:10:36.989668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.020 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.990016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.020 [2024-11-28 13:10:36.990044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.020 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.990287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.020 [2024-11-28 13:10:36.990317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.020 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.990719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.020 [2024-11-28 13:10:36.990747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.020 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.991108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.020 [2024-11-28 13:10:36.991135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.020 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.991543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.020 [2024-11-28 13:10:36.991572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.020 qpair failed and we were unable to recover it.
00:40:07.020 [2024-11-28 13:10:36.991839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.021 [2024-11-28 13:10:36.991866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.021 qpair failed and we were unable to recover it.
00:40:07.021 [2024-11-28 13:10:36.992130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.021 [2024-11-28 13:10:36.992167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.021 qpair failed and we were unable to recover it.
00:40:07.021 [2024-11-28 13:10:36.992401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.021 [2024-11-28 13:10:36.992428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.021 qpair failed and we were unable to recover it.
00:40:07.021 [2024-11-28 13:10:36.992827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.021 [2024-11-28 13:10:36.992855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.021 qpair failed and we were unable to recover it.
00:40:07.021 [2024-11-28 13:10:36.993223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.021 [2024-11-28 13:10:36.993252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.021 qpair failed and we were unable to recover it.
00:40:07.021 [2024-11-28 13:10:36.993477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.021 [2024-11-28 13:10:36.993508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.021 qpair failed and we were unable to recover it.
00:40:07.021 [2024-11-28 13:10:36.993720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.021 [2024-11-28 13:10:36.993747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.021 qpair failed and we were unable to recover it.
00:40:07.021 [2024-11-28 13:10:36.994000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.021 [2024-11-28 13:10:36.994028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.021 qpair failed and we were unable to recover it.
00:40:07.021 [2024-11-28 13:10:36.994454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.021 [2024-11-28 13:10:36.994485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.021 qpair failed and we were unable to recover it.
00:40:07.021 [2024-11-28 13:10:36.994729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.021 [2024-11-28 13:10:36.994757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.021 qpair failed and we were unable to recover it. 00:40:07.021 [2024-11-28 13:10:36.995112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.021 [2024-11-28 13:10:36.995140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.021 qpair failed and we were unable to recover it. 00:40:07.021 [2024-11-28 13:10:36.995575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.021 [2024-11-28 13:10:36.995606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.021 qpair failed and we were unable to recover it. 00:40:07.021 [2024-11-28 13:10:36.995964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.021 [2024-11-28 13:10:36.995992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.021 qpair failed and we were unable to recover it. 00:40:07.021 [2024-11-28 13:10:36.996239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.021 [2024-11-28 13:10:36.996268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.021 qpair failed and we were unable to recover it. 
00:40:07.021 [2024-11-28 13:10:36.996614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.021 [2024-11-28 13:10:36.996641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.021 qpair failed and we were unable to recover it. 00:40:07.021 [2024-11-28 13:10:36.996937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.021 [2024-11-28 13:10:36.996964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.021 qpair failed and we were unable to recover it. 00:40:07.021 [2024-11-28 13:10:36.997339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.021 [2024-11-28 13:10:36.997368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.021 qpair failed and we were unable to recover it. 00:40:07.021 [2024-11-28 13:10:36.997744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.021 [2024-11-28 13:10:36.997771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.021 qpair failed and we were unable to recover it. 00:40:07.021 [2024-11-28 13:10:36.997869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.021 [2024-11-28 13:10:36.997896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.021 qpair failed and we were unable to recover it. 
00:40:07.021 [2024-11-28 13:10:36.998291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.021 [2024-11-28 13:10:36.998320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.021 qpair failed and we were unable to recover it. 00:40:07.021 [2024-11-28 13:10:36.998544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.021 [2024-11-28 13:10:36.998576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.021 qpair failed and we were unable to recover it. 00:40:07.021 [2024-11-28 13:10:36.998966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.021 [2024-11-28 13:10:36.998994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.021 qpair failed and we were unable to recover it. 00:40:07.021 [2024-11-28 13:10:36.999287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.021 [2024-11-28 13:10:36.999317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.021 qpair failed and we were unable to recover it. 00:40:07.021 [2024-11-28 13:10:36.999684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.021 [2024-11-28 13:10:36.999711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.021 qpair failed and we were unable to recover it. 
00:40:07.021 [2024-11-28 13:10:36.999923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.021 [2024-11-28 13:10:36.999950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.021 qpair failed and we were unable to recover it. 00:40:07.021 [2024-11-28 13:10:37.000177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.021 [2024-11-28 13:10:37.000207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.021 qpair failed and we were unable to recover it. 00:40:07.021 [2024-11-28 13:10:37.000599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.021 [2024-11-28 13:10:37.000627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.021 qpair failed and we were unable to recover it. 00:40:07.021 [2024-11-28 13:10:37.000833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.021 [2024-11-28 13:10:37.000861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.021 qpair failed and we were unable to recover it. 00:40:07.021 [2024-11-28 13:10:37.001120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.021 [2024-11-28 13:10:37.001154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.021 qpair failed and we were unable to recover it. 
00:40:07.021 [2024-11-28 13:10:37.001567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.001596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.001945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.001973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.002202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.002231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.002502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.002535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.002922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.002950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 
00:40:07.022 [2024-11-28 13:10:37.003317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.003354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.003722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.003750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.004065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.004092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.004455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.004484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.004708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.004737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 
00:40:07.022 [2024-11-28 13:10:37.005122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.005151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.005408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.005435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.005680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.005708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.006074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.006102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.006349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.006378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 
00:40:07.022 [2024-11-28 13:10:37.006678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.006705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.007068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.007097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.007329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.007359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.007742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.007769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.007993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.008021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 
00:40:07.022 [2024-11-28 13:10:37.008357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.008385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.008739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.008767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.009138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.009172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.009522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.009550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.009932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.009959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 
00:40:07.022 [2024-11-28 13:10:37.010306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.010335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.010709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.010738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.011095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.011123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.011355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.022 [2024-11-28 13:10:37.011384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.022 qpair failed and we were unable to recover it. 00:40:07.022 [2024-11-28 13:10:37.011656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.011685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 
00:40:07.023 [2024-11-28 13:10:37.012027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.012055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.012399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.012428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.012651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.012679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.013045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.013073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.013356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.013385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 
00:40:07.023 [2024-11-28 13:10:37.013754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.013782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.014133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.014168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.014428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.014460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.014824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.014852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.015215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.015244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 
00:40:07.023 [2024-11-28 13:10:37.015546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.015573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.015937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.015965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.016345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.016374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.016706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.016734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.017082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.017109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 
00:40:07.023 [2024-11-28 13:10:37.017476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.017512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.017878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.017906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.018121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.018149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.018369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.018397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.018736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.018765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 
00:40:07.023 [2024-11-28 13:10:37.019023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.019051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.019418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.019447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.019706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.019734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.020114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.020141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.020502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.020530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 
00:40:07.023 [2024-11-28 13:10:37.020886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.020913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.021261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.021290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.021650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.021677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.021991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.022018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.022366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.022396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 
00:40:07.023 [2024-11-28 13:10:37.022615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.023 [2024-11-28 13:10:37.022643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.023 qpair failed and we were unable to recover it. 00:40:07.023 [2024-11-28 13:10:37.022942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.022969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 00:40:07.024 [2024-11-28 13:10:37.023311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.023340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 00:40:07.024 [2024-11-28 13:10:37.023693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.023721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 00:40:07.024 [2024-11-28 13:10:37.024063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.024091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 
00:40:07.024 [2024-11-28 13:10:37.024448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.024477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 00:40:07.024 [2024-11-28 13:10:37.024697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.024725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 00:40:07.024 [2024-11-28 13:10:37.025022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.025050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 00:40:07.024 [2024-11-28 13:10:37.025427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.025455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 00:40:07.024 [2024-11-28 13:10:37.025664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.025692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 
00:40:07.024 [2024-11-28 13:10:37.026022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.026051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 00:40:07.024 [2024-11-28 13:10:37.026380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.026409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 00:40:07.024 [2024-11-28 13:10:37.026750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.026779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 00:40:07.024 [2024-11-28 13:10:37.027014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.027042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 00:40:07.024 [2024-11-28 13:10:37.027259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.027288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 
00:40:07.024 [2024-11-28 13:10:37.027643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.027671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 00:40:07.024 [2024-11-28 13:10:37.027984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.028012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 00:40:07.024 [2024-11-28 13:10:37.028378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.028407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 00:40:07.024 [2024-11-28 13:10:37.028768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.028796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 00:40:07.024 [2024-11-28 13:10:37.029148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.029183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 
00:40:07.024 [2024-11-28 13:10:37.029415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.029443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 00:40:07.024 [2024-11-28 13:10:37.029792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.029820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 00:40:07.024 [2024-11-28 13:10:37.030170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.030198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 00:40:07.024 [2024-11-28 13:10:37.030411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.030438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 00:40:07.024 [2024-11-28 13:10:37.030787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.030815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 
00:40:07.024 [2024-11-28 13:10:37.031146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.031202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 00:40:07.024 [2024-11-28 13:10:37.031513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.031541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 00:40:07.024 [2024-11-28 13:10:37.031842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.031870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 00:40:07.024 [2024-11-28 13:10:37.032119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.024 [2024-11-28 13:10:37.032150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.024 qpair failed and we were unable to recover it. 00:40:07.024 [2024-11-28 13:10:37.032379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.032408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 
00:40:07.025 [2024-11-28 13:10:37.032667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.032695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.033051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.033079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.033422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.033450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.033799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.033827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.034184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.034213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 
00:40:07.025 [2024-11-28 13:10:37.034554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.034581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.034980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.035007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.035360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.035389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.035710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.035739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.035965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.035994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 
00:40:07.025 [2024-11-28 13:10:37.036197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.036226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.036605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.036633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.036977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.037006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.037112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.037142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.037615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.037707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 
00:40:07.025 [2024-11-28 13:10:37.037936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.037976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.038324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.038358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.038698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.038728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.038980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.039008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.039418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.039450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 
00:40:07.025 [2024-11-28 13:10:37.039799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.039827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.040177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.040207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.040572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.040601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.040833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.040861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.041133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.041171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 
00:40:07.025 [2024-11-28 13:10:37.041497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.041525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.041885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.041912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.042112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.042140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.042531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.042560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 00:40:07.025 [2024-11-28 13:10:37.042752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.025 [2024-11-28 13:10:37.042780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.025 qpair failed and we were unable to recover it. 
00:40:07.025 [2024-11-28 13:10:37.043120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.043148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 00:40:07.026 [2024-11-28 13:10:37.043383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.043413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 00:40:07.026 [2024-11-28 13:10:37.043661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.043693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 00:40:07.026 [2024-11-28 13:10:37.044055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.044083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 00:40:07.026 [2024-11-28 13:10:37.044444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.044474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 
00:40:07.026 [2024-11-28 13:10:37.044820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.044848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 00:40:07.026 [2024-11-28 13:10:37.045200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.045229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 00:40:07.026 [2024-11-28 13:10:37.045627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.045654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 00:40:07.026 [2024-11-28 13:10:37.045925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.045952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 00:40:07.026 [2024-11-28 13:10:37.046151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.046187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 
00:40:07.026 [2024-11-28 13:10:37.046524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.046552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 00:40:07.026 [2024-11-28 13:10:37.046881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.046911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 00:40:07.026 [2024-11-28 13:10:37.047236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.047266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 00:40:07.026 [2024-11-28 13:10:37.047619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.047647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 00:40:07.026 [2024-11-28 13:10:37.047995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.048023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 
00:40:07.026 [2024-11-28 13:10:37.048399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.048429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 00:40:07.026 [2024-11-28 13:10:37.048772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.048800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 00:40:07.026 [2024-11-28 13:10:37.049135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.049170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 00:40:07.026 [2024-11-28 13:10:37.049401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.049429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 00:40:07.026 [2024-11-28 13:10:37.049778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.049818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 
00:40:07.026 [2024-11-28 13:10:37.050134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.050182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 00:40:07.026 [2024-11-28 13:10:37.050387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.050415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 00:40:07.026 [2024-11-28 13:10:37.050615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.050642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 00:40:07.026 [2024-11-28 13:10:37.051015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.051043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 00:40:07.026 [2024-11-28 13:10:37.051293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.051323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 
00:40:07.026 [2024-11-28 13:10:37.051657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.051685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 00:40:07.026 [2024-11-28 13:10:37.052036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.026 [2024-11-28 13:10:37.052064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.026 qpair failed and we were unable to recover it. 00:40:07.026 [2024-11-28 13:10:37.052274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.052304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 00:40:07.027 [2024-11-28 13:10:37.052511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.052539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 00:40:07.027 [2024-11-28 13:10:37.052888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.052915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 
00:40:07.027 [2024-11-28 13:10:37.053273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.053302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 00:40:07.027 [2024-11-28 13:10:37.053518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.053546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 00:40:07.027 [2024-11-28 13:10:37.053776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.053807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 00:40:07.027 [2024-11-28 13:10:37.054094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.054123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 00:40:07.027 [2024-11-28 13:10:37.054479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.054508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 
00:40:07.027 [2024-11-28 13:10:37.054860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.054887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 00:40:07.027 [2024-11-28 13:10:37.055251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.055281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 00:40:07.027 [2024-11-28 13:10:37.055646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.055674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 00:40:07.027 [2024-11-28 13:10:37.056022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.056049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 00:40:07.027 [2024-11-28 13:10:37.056394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.056424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 
00:40:07.027 [2024-11-28 13:10:37.056770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.056798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 00:40:07.027 [2024-11-28 13:10:37.057053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.057085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 00:40:07.027 [2024-11-28 13:10:37.057317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.057346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 00:40:07.027 [2024-11-28 13:10:37.057692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.057719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 00:40:07.027 [2024-11-28 13:10:37.058080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.058107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 
00:40:07.027 [2024-11-28 13:10:37.058340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.058369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 00:40:07.027 [2024-11-28 13:10:37.058711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.058739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 00:40:07.027 [2024-11-28 13:10:37.059077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.059105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 00:40:07.027 [2024-11-28 13:10:37.059445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.059474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 00:40:07.027 [2024-11-28 13:10:37.059828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.059856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 
00:40:07.027 [2024-11-28 13:10:37.060200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.060230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 00:40:07.027 [2024-11-28 13:10:37.060572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.060600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 00:40:07.027 [2024-11-28 13:10:37.060940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.060968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 00:40:07.027 [2024-11-28 13:10:37.061221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.061250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 00:40:07.027 [2024-11-28 13:10:37.061489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.061516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 
00:40:07.027 [2024-11-28 13:10:37.061762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.027 [2024-11-28 13:10:37.061794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.027 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.062167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.062196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.062536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.062564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.062907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.062935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.063285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.063314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 
00:40:07.028 [2024-11-28 13:10:37.063637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.063671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.064029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.064057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.064407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.064437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.064776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.064804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.065134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.065169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 
00:40:07.028 [2024-11-28 13:10:37.065258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.065285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.065712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.065740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.066068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.066096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.066364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.066393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.066761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.066789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 
00:40:07.028 [2024-11-28 13:10:37.067146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.067180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.067518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.067546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.067893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.067921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.068285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.068314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.068678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.068707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 
00:40:07.028 [2024-11-28 13:10:37.069063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.069091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.069445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.069475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.069731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.069758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.070144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.070180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.070527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.070555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 
00:40:07.028 [2024-11-28 13:10:37.070889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.070916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.071267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.071296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.071649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.071677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.072032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.072059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 00:40:07.028 [2024-11-28 13:10:37.072406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.072436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.028 qpair failed and we were unable to recover it. 
00:40:07.028 [2024-11-28 13:10:37.072667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.028 [2024-11-28 13:10:37.072695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.073041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.073068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.073284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.073319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.073552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.073581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.073927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.073955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 
00:40:07.029 [2024-11-28 13:10:37.074295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.074324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.074671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.074699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.075054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.075082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.075372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.075401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.075625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.075652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 
00:40:07.029 [2024-11-28 13:10:37.075995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.076022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.076326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.076355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.076601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.076630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.077009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.077036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.077373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.077401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 
00:40:07.029 [2024-11-28 13:10:37.077744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.077772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.078008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.078037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.078284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.078312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.078521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.078548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.078914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.078942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 
00:40:07.029 [2024-11-28 13:10:37.079291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.079320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.079683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.079712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.080056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.080084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.080336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.080369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.080599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.080627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 
00:40:07.029 [2024-11-28 13:10:37.080888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.080917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.081266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.081295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.081677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.081704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.082053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.082081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.029 qpair failed and we were unable to recover it. 00:40:07.029 [2024-11-28 13:10:37.082427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.029 [2024-11-28 13:10:37.082456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 
00:40:07.030 [2024-11-28 13:10:37.082807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.082835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.083196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.083225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.083555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.083584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.083808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.083836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.084179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.084208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 
00:40:07.030 [2024-11-28 13:10:37.084554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.084583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.084921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.084948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.085157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.085193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.085293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.085319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.085670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.085698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 
00:40:07.030 [2024-11-28 13:10:37.085923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.085951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.086262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.086291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.086526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.086554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.086898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.086932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.087286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.087315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 
00:40:07.030 [2024-11-28 13:10:37.087668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.087695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.087914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.087942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.088289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.088317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.088684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.088711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.089049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.089077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 
00:40:07.030 [2024-11-28 13:10:37.089430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.089460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.089810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.089838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.090196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.090224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.090543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.090571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.090765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.090792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 
00:40:07.030 [2024-11-28 13:10:37.091006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.091037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.091347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.091376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.091607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.091635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.092003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.092031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.030 qpair failed and we were unable to recover it. 00:40:07.030 [2024-11-28 13:10:37.092369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.030 [2024-11-28 13:10:37.092398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 
00:40:07.031 [2024-11-28 13:10:37.092643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.092675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 00:40:07.031 [2024-11-28 13:10:37.093024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.093052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 00:40:07.031 [2024-11-28 13:10:37.093367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.093396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 00:40:07.031 [2024-11-28 13:10:37.093739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.093767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 00:40:07.031 [2024-11-28 13:10:37.093976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.094004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 
00:40:07.031 [2024-11-28 13:10:37.094231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.094261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 00:40:07.031 [2024-11-28 13:10:37.094668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.094696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 00:40:07.031 [2024-11-28 13:10:37.095091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.095119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 00:40:07.031 [2024-11-28 13:10:37.095504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.095535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 00:40:07.031 [2024-11-28 13:10:37.095861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.095889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 
00:40:07.031 [2024-11-28 13:10:37.096250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.096286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 00:40:07.031 [2024-11-28 13:10:37.096523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.096551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 00:40:07.031 [2024-11-28 13:10:37.096891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.096918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 00:40:07.031 [2024-11-28 13:10:37.097237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.097266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 00:40:07.031 [2024-11-28 13:10:37.097480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.097508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 
00:40:07.031 [2024-11-28 13:10:37.097864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.097892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 00:40:07.031 [2024-11-28 13:10:37.098255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.098284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 00:40:07.031 [2024-11-28 13:10:37.098629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.098656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 00:40:07.031 [2024-11-28 13:10:37.098881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.098908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 00:40:07.031 [2024-11-28 13:10:37.099248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.099277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 
00:40:07.031 [2024-11-28 13:10:37.099487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.099515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 00:40:07.031 [2024-11-28 13:10:37.099856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.099884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 00:40:07.031 [2024-11-28 13:10:37.100229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.100258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 00:40:07.031 [2024-11-28 13:10:37.100576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.100604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 00:40:07.031 [2024-11-28 13:10:37.100965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.100993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 
00:40:07.031 [2024-11-28 13:10:37.101355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.101384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 00:40:07.031 [2024-11-28 13:10:37.101607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.101638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.031 qpair failed and we were unable to recover it. 00:40:07.031 [2024-11-28 13:10:37.101963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.031 [2024-11-28 13:10:37.101991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.032 qpair failed and we were unable to recover it. 00:40:07.032 [2024-11-28 13:10:37.102362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.032 [2024-11-28 13:10:37.102391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.032 qpair failed and we were unable to recover it. 00:40:07.032 [2024-11-28 13:10:37.102595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.032 [2024-11-28 13:10:37.102623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.032 qpair failed and we were unable to recover it. 
00:40:07.032 [2024-11-28 13:10:37.102988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.032 [2024-11-28 13:10:37.103015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.032 qpair failed and we were unable to recover it. 00:40:07.032 [2024-11-28 13:10:37.103322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.032 [2024-11-28 13:10:37.103350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.032 qpair failed and we were unable to recover it. 00:40:07.032 [2024-11-28 13:10:37.103585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.032 [2024-11-28 13:10:37.103614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.032 qpair failed and we were unable to recover it. 00:40:07.032 [2024-11-28 13:10:37.103934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.032 [2024-11-28 13:10:37.103961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.032 qpair failed and we were unable to recover it. 00:40:07.032 [2024-11-28 13:10:37.104173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.032 [2024-11-28 13:10:37.104203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.032 qpair failed and we were unable to recover it. 
00:40:07.032 [2024-11-28 13:10:37.104527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.032 [2024-11-28 13:10:37.104555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.032 qpair failed and we were unable to recover it. 00:40:07.032 [2024-11-28 13:10:37.104904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.032 [2024-11-28 13:10:37.104932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.032 qpair failed and we were unable to recover it. 00:40:07.032 [2024-11-28 13:10:37.105194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.032 [2024-11-28 13:10:37.105224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.032 qpair failed and we were unable to recover it. 00:40:07.032 [2024-11-28 13:10:37.105554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.032 [2024-11-28 13:10:37.105582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.032 qpair failed and we were unable to recover it. 00:40:07.032 [2024-11-28 13:10:37.105943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.032 [2024-11-28 13:10:37.105971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.032 qpair failed and we were unable to recover it. 
00:40:07.032 [2024-11-28 13:10:37.106327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.032 [2024-11-28 13:10:37.106356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.032 qpair failed and we were unable to recover it. 00:40:07.032 [2024-11-28 13:10:37.106720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.032 [2024-11-28 13:10:37.106747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.032 qpair failed and we were unable to recover it. 00:40:07.032 [2024-11-28 13:10:37.107100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.032 [2024-11-28 13:10:37.107127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.032 qpair failed and we were unable to recover it. 00:40:07.032 [2024-11-28 13:10:37.107469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.032 [2024-11-28 13:10:37.107498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.032 qpair failed and we were unable to recover it. 00:40:07.032 [2024-11-28 13:10:37.107848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.032 [2024-11-28 13:10:37.107876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.032 qpair failed and we were unable to recover it. 
00:40:07.032 [2024-11-28 13:10:37.108230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.032 [2024-11-28 13:10:37.108258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.032 qpair failed and we were unable to recover it. 00:40:07.032 [2024-11-28 13:10:37.108464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.032 [2024-11-28 13:10:37.108491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.032 qpair failed and we were unable to recover it. 00:40:07.032 [2024-11-28 13:10:37.108812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.032 [2024-11-28 13:10:37.108840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.032 qpair failed and we were unable to recover it. 00:40:07.314 [2024-11-28 13:10:37.109192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.314 [2024-11-28 13:10:37.109223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.314 qpair failed and we were unable to recover it. 00:40:07.314 [2024-11-28 13:10:37.109538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.314 [2024-11-28 13:10:37.109566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.314 qpair failed and we were unable to recover it. 
00:40:07.314 [2024-11-28 13:10:37.109920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.314 [2024-11-28 13:10:37.109947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.314 qpair failed and we were unable to recover it. 00:40:07.314 [2024-11-28 13:10:37.110057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.314 [2024-11-28 13:10:37.110094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.314 qpair failed and we were unable to recover it. 00:40:07.314 [2024-11-28 13:10:37.110433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.314 [2024-11-28 13:10:37.110462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.314 qpair failed and we were unable to recover it. 00:40:07.314 [2024-11-28 13:10:37.110664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.314 [2024-11-28 13:10:37.110692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.314 qpair failed and we were unable to recover it. 00:40:07.314 [2024-11-28 13:10:37.111039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.314 [2024-11-28 13:10:37.111066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.314 qpair failed and we were unable to recover it. 
00:40:07.314 [2024-11-28 13:10:37.111387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.314 [2024-11-28 13:10:37.111415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.314 qpair failed and we were unable to recover it. 00:40:07.314 [2024-11-28 13:10:37.111608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.314 [2024-11-28 13:10:37.111636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.314 qpair failed and we were unable to recover it. 00:40:07.314 [2024-11-28 13:10:37.111961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.314 [2024-11-28 13:10:37.111989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.314 qpair failed and we were unable to recover it. 00:40:07.314 [2024-11-28 13:10:37.112207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.314 [2024-11-28 13:10:37.112236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.314 qpair failed and we were unable to recover it. 00:40:07.314 [2024-11-28 13:10:37.112579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.314 [2024-11-28 13:10:37.112607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.314 qpair failed and we were unable to recover it. 
00:40:07.314 [2024-11-28 13:10:37.112919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.314 [2024-11-28 13:10:37.112946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.314 qpair failed and we were unable to recover it. 00:40:07.314 [2024-11-28 13:10:37.113315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.314 [2024-11-28 13:10:37.113344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.314 qpair failed and we were unable to recover it. 00:40:07.314 [2024-11-28 13:10:37.113704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.314 [2024-11-28 13:10:37.113731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.314 qpair failed and we were unable to recover it. 00:40:07.314 [2024-11-28 13:10:37.114084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.314 [2024-11-28 13:10:37.114111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.315 qpair failed and we were unable to recover it. 00:40:07.315 [2024-11-28 13:10:37.114417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.315 [2024-11-28 13:10:37.114446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.315 qpair failed and we were unable to recover it. 
00:40:07.315 [2024-11-28 13:10:37.114798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.315 [2024-11-28 13:10:37.114826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.315 qpair failed and we were unable to recover it. 00:40:07.315 [2024-11-28 13:10:37.115182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.315 [2024-11-28 13:10:37.115211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.315 qpair failed and we were unable to recover it. 00:40:07.315 [2024-11-28 13:10:37.115550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.315 [2024-11-28 13:10:37.115577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.315 qpair failed and we were unable to recover it. 00:40:07.315 [2024-11-28 13:10:37.115805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.315 [2024-11-28 13:10:37.115836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.315 qpair failed and we were unable to recover it. 00:40:07.315 [2024-11-28 13:10:37.116187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.315 [2024-11-28 13:10:37.116216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.315 qpair failed and we were unable to recover it. 
00:40:07.315 [2024-11-28 13:10:37.116445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.315 [2024-11-28 13:10:37.116472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.315 qpair failed and we were unable to recover it. 00:40:07.315 [2024-11-28 13:10:37.116672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.315 [2024-11-28 13:10:37.116699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.315 qpair failed and we were unable to recover it. 00:40:07.315 [2024-11-28 13:10:37.116947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.315 [2024-11-28 13:10:37.116975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.315 qpair failed and we were unable to recover it. 00:40:07.315 [2024-11-28 13:10:37.117344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.315 [2024-11-28 13:10:37.117373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.315 qpair failed and we were unable to recover it. 00:40:07.315 [2024-11-28 13:10:37.117716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.315 [2024-11-28 13:10:37.117743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.315 qpair failed and we were unable to recover it. 
00:40:07.315 [2024-11-28 13:10:37.118094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.315 [2024-11-28 13:10:37.118121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.315 qpair failed and we were unable to recover it.
00:40:07.315 [2024-11-28 13:10:37.118518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.315 [2024-11-28 13:10:37.118547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.315 qpair failed and we were unable to recover it.
00:40:07.315 [2024-11-28 13:10:37.118890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.315 [2024-11-28 13:10:37.118918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.315 qpair failed and we were unable to recover it.
00:40:07.315 [2024-11-28 13:10:37.119281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.315 [2024-11-28 13:10:37.119324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.315 qpair failed and we were unable to recover it.
00:40:07.315 [2024-11-28 13:10:37.119683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.315 [2024-11-28 13:10:37.119710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.315 qpair failed and we were unable to recover it.
00:40:07.315 [2024-11-28 13:10:37.120055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.315 [2024-11-28 13:10:37.120083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.315 qpair failed and we were unable to recover it.
00:40:07.315 [2024-11-28 13:10:37.120296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.315 [2024-11-28 13:10:37.120325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.315 qpair failed and we were unable to recover it.
00:40:07.315 [2024-11-28 13:10:37.120554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.315 [2024-11-28 13:10:37.120581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.315 qpair failed and we were unable to recover it.
00:40:07.315 [2024-11-28 13:10:37.120797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.315 [2024-11-28 13:10:37.120828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.315 qpair failed and we were unable to recover it.
00:40:07.315 [2024-11-28 13:10:37.121086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.315 [2024-11-28 13:10:37.121114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.315 qpair failed and we were unable to recover it.
00:40:07.315 [2024-11-28 13:10:37.121368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.315 [2024-11-28 13:10:37.121397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.315 qpair failed and we were unable to recover it.
00:40:07.315 [2024-11-28 13:10:37.121767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.315 [2024-11-28 13:10:37.121795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.315 qpair failed and we were unable to recover it.
00:40:07.315 [2024-11-28 13:10:37.122141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.315 [2024-11-28 13:10:37.122175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.315 qpair failed and we were unable to recover it.
00:40:07.315 [2024-11-28 13:10:37.122519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.315 [2024-11-28 13:10:37.122548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.315 qpair failed and we were unable to recover it.
00:40:07.315 [2024-11-28 13:10:37.122902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.315 [2024-11-28 13:10:37.122929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.315 qpair failed and we were unable to recover it.
00:40:07.315 [2024-11-28 13:10:37.123285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.315 [2024-11-28 13:10:37.123314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.315 qpair failed and we were unable to recover it.
00:40:07.315 [2024-11-28 13:10:37.123662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.315 [2024-11-28 13:10:37.123689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.315 qpair failed and we were unable to recover it.
00:40:07.315 [2024-11-28 13:10:37.123940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.123969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.124216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.124249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.124622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.124650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.124844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.124871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.125104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.125130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.125377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.125406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.125616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.125644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.125946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.125973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.126417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.126446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.126786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.126813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.127047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.127075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.127430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.127460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.127820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.127847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.128095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.128126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.128237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.128265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.128569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.128597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.128944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.128972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.129341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.129370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.129555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.129582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.129942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.129969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.130313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.130342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.130692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.130720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.131070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.131096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.131321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.131351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.131698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.131726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.131974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.132005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.132333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.132361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.132553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.132587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.132947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.316 [2024-11-28 13:10:37.132975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.316 qpair failed and we were unable to recover it.
00:40:07.316 [2024-11-28 13:10:37.133308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.133337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.133683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.133711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.134064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.134093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.134451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.134479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.134812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.134841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.135187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.135219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.135427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.135456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.135875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.135902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.136114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.136142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.136379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.136407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.136611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.136639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.136956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.136983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.137204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.137234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.137542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.137569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.137892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.137919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.138254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.138282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.138517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.138545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.138842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.138870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.139070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.139097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.139451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.139482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.139834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.139861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.140210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.140238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.140362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.140389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.140719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.140747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.141099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.141126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.141501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.141530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.141738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.141766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.142108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.142137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.142458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.142487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.317 [2024-11-28 13:10:37.142837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.317 [2024-11-28 13:10:37.142865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.317 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.143211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.143240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.143435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.143464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.143687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.143715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.144052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.144080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.144449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.144478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.144701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.144728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.144936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.144964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.145335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.145364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.145695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.145722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.146068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.146097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.146439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.146467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.146812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.146840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.147047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.147075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.147357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.147386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.147712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.147739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.148005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.148033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.148122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.148150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.148418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.148446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.148798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.148826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.149188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.149218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.149543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.149571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.149805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.149834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.150081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.150113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.150479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.150510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.150867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.150894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.151245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.151275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.151634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.151662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.318 qpair failed and we were unable to recover it.
00:40:07.318 [2024-11-28 13:10:37.152013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.318 [2024-11-28 13:10:37.152040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.319 qpair failed and we were unable to recover it.
00:40:07.319 [2024-11-28 13:10:37.152402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.319 [2024-11-28 13:10:37.152431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.319 qpair failed and we were unable to recover it.
00:40:07.319 [2024-11-28 13:10:37.152767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.319 [2024-11-28 13:10:37.152794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.319 qpair failed and we were unable to recover it.
00:40:07.319 [2024-11-28 13:10:37.153143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.319 [2024-11-28 13:10:37.153180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.319 qpair failed and we were unable to recover it.
00:40:07.319 [2024-11-28 13:10:37.153463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.319 [2024-11-28 13:10:37.153491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.319 qpair failed and we were unable to recover it.
00:40:07.319 [2024-11-28 13:10:37.153840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.319 [2024-11-28 13:10:37.153867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.319 qpair failed and we were unable to recover it.
00:40:07.319 [2024-11-28 13:10:37.154215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.319 [2024-11-28 13:10:37.154244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.319 qpair failed and we were unable to recover it.
00:40:07.319 [2024-11-28 13:10:37.154483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.319 [2024-11-28 13:10:37.154511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.319 qpair failed and we were unable to recover it.
00:40:07.319 [2024-11-28 13:10:37.154822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.319 [2024-11-28 13:10:37.154849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.319 qpair failed and we were unable to recover it.
00:40:07.319 [2024-11-28 13:10:37.155210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.319 [2024-11-28 13:10:37.155246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.319 qpair failed and we were unable to recover it.
00:40:07.319 [2024-11-28 13:10:37.155604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.319 [2024-11-28 13:10:37.155632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.319 qpair failed and we were unable to recover it.
00:40:07.319 [2024-11-28 13:10:37.155878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.319 [2024-11-28 13:10:37.155910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.319 qpair failed and we were unable to recover it.
00:40:07.319 [2024-11-28 13:10:37.156236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.319 [2024-11-28 13:10:37.156265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.319 qpair failed and we were unable to recover it. 00:40:07.319 [2024-11-28 13:10:37.156650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.319 [2024-11-28 13:10:37.156677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.319 qpair failed and we were unable to recover it. 00:40:07.319 [2024-11-28 13:10:37.156870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.319 [2024-11-28 13:10:37.156898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.319 qpair failed and we were unable to recover it. 00:40:07.319 [2024-11-28 13:10:37.157242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.319 [2024-11-28 13:10:37.157270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.319 qpair failed and we were unable to recover it. 00:40:07.319 [2024-11-28 13:10:37.157615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.319 [2024-11-28 13:10:37.157642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.319 qpair failed and we were unable to recover it. 
00:40:07.319 [2024-11-28 13:10:37.157993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.319 [2024-11-28 13:10:37.158021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.319 qpair failed and we were unable to recover it. 00:40:07.319 [2024-11-28 13:10:37.158387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.319 [2024-11-28 13:10:37.158416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.319 qpair failed and we were unable to recover it. 00:40:07.319 [2024-11-28 13:10:37.158510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.319 [2024-11-28 13:10:37.158537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.319 qpair failed and we were unable to recover it. 00:40:07.319 [2024-11-28 13:10:37.158972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.319 [2024-11-28 13:10:37.159078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420 00:40:07.319 qpair failed and we were unable to recover it. 00:40:07.319 [2024-11-28 13:10:37.159672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.319 [2024-11-28 13:10:37.159765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420 00:40:07.319 qpair failed and we were unable to recover it. 
00:40:07.319 [2024-11-28 13:10:37.160204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.319 [2024-11-28 13:10:37.160245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420 00:40:07.319 qpair failed and we were unable to recover it. 00:40:07.319 [2024-11-28 13:10:37.160468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.319 [2024-11-28 13:10:37.160499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420 00:40:07.319 qpair failed and we were unable to recover it. 00:40:07.319 [2024-11-28 13:10:37.160857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.319 [2024-11-28 13:10:37.160886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420 00:40:07.319 qpair failed and we were unable to recover it. 00:40:07.319 [2024-11-28 13:10:37.161103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.319 [2024-11-28 13:10:37.161131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420 00:40:07.319 qpair failed and we were unable to recover it. 00:40:07.319 [2024-11-28 13:10:37.161542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.319 [2024-11-28 13:10:37.161575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.319 qpair failed and we were unable to recover it. 
00:40:07.319 [2024-11-28 13:10:37.161894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.319 [2024-11-28 13:10:37.161921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.319 qpair failed and we were unable to recover it. 00:40:07.319 [2024-11-28 13:10:37.162115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.319 [2024-11-28 13:10:37.162142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.319 qpair failed and we were unable to recover it. 00:40:07.319 [2024-11-28 13:10:37.162392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.320 [2024-11-28 13:10:37.162420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.320 qpair failed and we were unable to recover it. 00:40:07.320 [2024-11-28 13:10:37.162790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.320 [2024-11-28 13:10:37.162817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.320 qpair failed and we were unable to recover it. 00:40:07.320 [2024-11-28 13:10:37.163181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.320 [2024-11-28 13:10:37.163211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.320 qpair failed and we were unable to recover it. 
00:40:07.320 [2024-11-28 13:10:37.163537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.320 [2024-11-28 13:10:37.163565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.320 qpair failed and we were unable to recover it. 00:40:07.320 [2024-11-28 13:10:37.163922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.320 [2024-11-28 13:10:37.163949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.320 qpair failed and we were unable to recover it. 00:40:07.320 [2024-11-28 13:10:37.164175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.320 [2024-11-28 13:10:37.164204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.320 qpair failed and we were unable to recover it. 00:40:07.320 [2024-11-28 13:10:37.164424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.320 [2024-11-28 13:10:37.164456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.320 qpair failed and we were unable to recover it. 00:40:07.320 [2024-11-28 13:10:37.164763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.320 [2024-11-28 13:10:37.164797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.320 qpair failed and we were unable to recover it. 
00:40:07.320 [2024-11-28 13:10:37.165147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.320 [2024-11-28 13:10:37.165184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.320 qpair failed and we were unable to recover it. 00:40:07.320 [2024-11-28 13:10:37.165508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.320 [2024-11-28 13:10:37.165536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.320 qpair failed and we were unable to recover it. 00:40:07.320 [2024-11-28 13:10:37.165884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.320 [2024-11-28 13:10:37.165912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.320 qpair failed and we were unable to recover it. 00:40:07.320 [2024-11-28 13:10:37.166275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.320 [2024-11-28 13:10:37.166304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.320 qpair failed and we were unable to recover it. 00:40:07.320 [2024-11-28 13:10:37.166662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.320 [2024-11-28 13:10:37.166690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.320 qpair failed and we were unable to recover it. 
00:40:07.320 [2024-11-28 13:10:37.167039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.320 [2024-11-28 13:10:37.167066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.320 qpair failed and we were unable to recover it. 00:40:07.320 [2024-11-28 13:10:37.167408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.320 [2024-11-28 13:10:37.167438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.320 qpair failed and we were unable to recover it. 00:40:07.320 [2024-11-28 13:10:37.167670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.320 [2024-11-28 13:10:37.167698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.320 qpair failed and we were unable to recover it. 00:40:07.320 [2024-11-28 13:10:37.167966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.320 [2024-11-28 13:10:37.167994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.320 qpair failed and we were unable to recover it. 00:40:07.320 [2024-11-28 13:10:37.168323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.320 [2024-11-28 13:10:37.168351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.320 qpair failed and we were unable to recover it. 
00:40:07.320 [2024-11-28 13:10:37.168697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.320 [2024-11-28 13:10:37.168725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.320 qpair failed and we were unable to recover it. 00:40:07.320 [2024-11-28 13:10:37.168931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.320 [2024-11-28 13:10:37.168958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.320 qpair failed and we were unable to recover it. 00:40:07.320 [2024-11-28 13:10:37.169293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.320 [2024-11-28 13:10:37.169321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.320 qpair failed and we were unable to recover it. 00:40:07.320 [2024-11-28 13:10:37.169673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.320 [2024-11-28 13:10:37.169701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.320 qpair failed and we were unable to recover it. 00:40:07.320 [2024-11-28 13:10:37.169955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.320 [2024-11-28 13:10:37.169982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.320 qpair failed and we were unable to recover it. 
00:40:07.320 [2024-11-28 13:10:37.170210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.170239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.170552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.170580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.170893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.170920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.171030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.171061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.171389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.171419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 
00:40:07.321 [2024-11-28 13:10:37.171757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.171785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.172136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.172171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.172371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.172399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.172611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.172638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.173005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.173032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 
00:40:07.321 [2024-11-28 13:10:37.173360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.173390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.173715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.173743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.174091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.174119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.174371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.174400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.174741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.174768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 
00:40:07.321 [2024-11-28 13:10:37.174973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.175001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.175385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.175416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.175776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.175804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.176166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.176195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.176517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.176546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 
00:40:07.321 [2024-11-28 13:10:37.176791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.176821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.177152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.177188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.177392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.177421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.177649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.177676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.178008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.178036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 
00:40:07.321 [2024-11-28 13:10:37.178270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.178305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.178613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.178641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.178968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.178995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.179314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.321 [2024-11-28 13:10:37.179343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.321 qpair failed and we were unable to recover it. 00:40:07.321 [2024-11-28 13:10:37.179710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.322 [2024-11-28 13:10:37.179739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.322 qpair failed and we were unable to recover it. 
00:40:07.322 [2024-11-28 13:10:37.180096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.322 [2024-11-28 13:10:37.180123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.322 qpair failed and we were unable to recover it. 00:40:07.322 [2024-11-28 13:10:37.180473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.322 [2024-11-28 13:10:37.180506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.322 qpair failed and we were unable to recover it. 00:40:07.322 [2024-11-28 13:10:37.180881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.322 [2024-11-28 13:10:37.180910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.322 qpair failed and we were unable to recover it. 00:40:07.322 [2024-11-28 13:10:37.181264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.322 [2024-11-28 13:10:37.181293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.322 qpair failed and we were unable to recover it. 00:40:07.322 [2024-11-28 13:10:37.181649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.322 [2024-11-28 13:10:37.181677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.322 qpair failed and we were unable to recover it. 
00:40:07.322 [2024-11-28 13:10:37.182009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.322 [2024-11-28 13:10:37.182037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.322 qpair failed and we were unable to recover it.
00:40:07.322 [2024-11-28 13:10:37.182387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.322 [2024-11-28 13:10:37.182417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.322 qpair failed and we were unable to recover it.
00:40:07.322 [2024-11-28 13:10:37.182658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.322 [2024-11-28 13:10:37.182686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.322 qpair failed and we were unable to recover it.
00:40:07.322 [2024-11-28 13:10:37.183055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.322 [2024-11-28 13:10:37.183082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.322 qpair failed and we were unable to recover it.
00:40:07.322 [2024-11-28 13:10:37.183293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.322 [2024-11-28 13:10:37.183323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.322 qpair failed and we were unable to recover it.
00:40:07.322 [2024-11-28 13:10:37.183656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.322 [2024-11-28 13:10:37.183684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.322 qpair failed and we were unable to recover it.
00:40:07.322 [2024-11-28 13:10:37.183904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.322 [2024-11-28 13:10:37.183931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.322 qpair failed and we were unable to recover it.
00:40:07.322 [2024-11-28 13:10:37.184277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.322 [2024-11-28 13:10:37.184306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.322 qpair failed and we were unable to recover it.
00:40:07.322 [2024-11-28 13:10:37.184616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.322 [2024-11-28 13:10:37.184644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.322 qpair failed and we were unable to recover it.
00:40:07.322 [2024-11-28 13:10:37.184988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.322 [2024-11-28 13:10:37.185016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.322 qpair failed and we were unable to recover it.
00:40:07.322 [2024-11-28 13:10:37.185392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.322 [2024-11-28 13:10:37.185421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.322 qpair failed and we were unable to recover it.
00:40:07.322 [2024-11-28 13:10:37.185637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.322 [2024-11-28 13:10:37.185669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.322 qpair failed and we were unable to recover it.
00:40:07.322 [2024-11-28 13:10:37.186022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.322 [2024-11-28 13:10:37.186049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.322 qpair failed and we were unable to recover it.
00:40:07.322 [2024-11-28 13:10:37.186244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.322 [2024-11-28 13:10:37.186273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.322 qpair failed and we were unable to recover it.
00:40:07.322 [2024-11-28 13:10:37.186469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.322 [2024-11-28 13:10:37.186496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.322 qpair failed and we were unable to recover it.
00:40:07.322 [2024-11-28 13:10:37.186905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.322 [2024-11-28 13:10:37.186933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.322 qpair failed and we were unable to recover it.
00:40:07.322 [2024-11-28 13:10:37.187293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.322 [2024-11-28 13:10:37.187322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.322 qpair failed and we were unable to recover it.
00:40:07.322 [2024-11-28 13:10:37.187676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.322 [2024-11-28 13:10:37.187710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.322 qpair failed and we were unable to recover it.
00:40:07.322 [2024-11-28 13:10:37.187913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.322 [2024-11-28 13:10:37.187940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.322 qpair failed and we were unable to recover it.
00:40:07.322 [2024-11-28 13:10:37.188301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.322 [2024-11-28 13:10:37.188330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.322 qpair failed and we were unable to recover it.
00:40:07.322 [2024-11-28 13:10:37.188701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.322 [2024-11-28 13:10:37.188730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.322 qpair failed and we were unable to recover it.
00:40:07.322 [2024-11-28 13:10:37.188961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.322 [2024-11-28 13:10:37.188988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.322 qpair failed and we were unable to recover it.
00:40:07.322 [2024-11-28 13:10:37.189302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.322 [2024-11-28 13:10:37.189330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.322 qpair failed and we were unable to recover it.
00:40:07.322 [2024-11-28 13:10:37.189648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.189676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.190000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.190029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.190394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.190424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.190755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.190783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.191092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.191120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.191525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.191555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.191910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.191938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.192182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.192215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.192459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.192488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.192794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.192824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.193102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.193130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.193540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.193569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.193903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.193931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.194025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.194052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.194554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.194648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.195041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.195077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.195537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.195630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.195913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.195949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.196279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.196309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.196587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.196616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.196966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.196995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.197357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.197386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.197697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.197726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.198075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.198105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.198495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.198525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.198823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.198851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.199079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.199106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.199352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.199382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.199779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.323 [2024-11-28 13:10:37.199806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.323 qpair failed and we were unable to recover it.
00:40:07.323 [2024-11-28 13:10:37.200169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.200200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.200430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.200457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.200801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.200828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.201199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.201229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.201556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.201583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.201961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.201989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.202337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.202367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.202662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.202690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.202921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.202950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.203198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.203228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.203477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.203504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.203859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.203887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.204238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.204268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.204586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.204615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.204994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.205022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.205356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.205384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.205615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.205644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.205990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.206018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.206368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.206397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.206756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.206791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.207118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.207147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.207512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.207540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.207895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.207924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.208099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.208132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.208511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.208541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.208897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.208925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.209273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.209303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.209669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.209697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.210048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.210076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.324 qpair failed and we were unable to recover it.
00:40:07.324 [2024-11-28 13:10:37.210315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.324 [2024-11-28 13:10:37.210344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.210703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.210731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.210991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.211018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.211259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.211288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.211547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.211576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.211923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.211951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.212298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.212327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.212688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.212715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.213068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.213095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.213432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.213461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.213824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.213852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.214204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.214233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.214581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.214609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.214920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.214948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.215312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.215341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.215694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.215722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.215981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.216013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.216378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.216410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.216619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.216648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.217009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.217037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.217379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.217409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.217674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.217701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.218063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.218091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.218427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.218455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.218712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.218740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.219088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.219115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.219488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.219517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.219863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.219892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.220109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.220136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.220467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.325 [2024-11-28 13:10:37.220496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.325 qpair failed and we were unable to recover it.
00:40:07.325 [2024-11-28 13:10:37.220804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.326 [2024-11-28 13:10:37.220839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.326 qpair failed and we were unable to recover it.
00:40:07.326 [2024-11-28 13:10:37.221207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.326 [2024-11-28 13:10:37.221236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.326 qpair failed and we were unable to recover it.
00:40:07.326 [2024-11-28 13:10:37.221442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.326 [2024-11-28 13:10:37.221469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.326 qpair failed and we were unable to recover it.
00:40:07.326 [2024-11-28 13:10:37.221722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.326 [2024-11-28 13:10:37.221750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.326 qpair failed and we were unable to recover it.
00:40:07.326 [2024-11-28 13:10:37.222094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.326 [2024-11-28 13:10:37.222121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.326 qpair failed and we were unable to recover it. 00:40:07.326 [2024-11-28 13:10:37.222531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.326 [2024-11-28 13:10:37.222560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.326 qpair failed and we were unable to recover it. 00:40:07.326 [2024-11-28 13:10:37.222933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.326 [2024-11-28 13:10:37.222961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.326 qpair failed and we were unable to recover it. 00:40:07.326 [2024-11-28 13:10:37.223392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.326 [2024-11-28 13:10:37.223421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.326 qpair failed and we were unable to recover it. 00:40:07.326 [2024-11-28 13:10:37.223776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.326 [2024-11-28 13:10:37.223804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.326 qpair failed and we were unable to recover it. 
00:40:07.326 [2024-11-28 13:10:37.224157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.326 [2024-11-28 13:10:37.224205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.326 qpair failed and we were unable to recover it. 00:40:07.326 [2024-11-28 13:10:37.224552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.326 [2024-11-28 13:10:37.224580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.326 qpair failed and we were unable to recover it. 00:40:07.326 [2024-11-28 13:10:37.224947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.326 [2024-11-28 13:10:37.224975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.326 qpair failed and we were unable to recover it. 00:40:07.326 [2024-11-28 13:10:37.225360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.326 [2024-11-28 13:10:37.225390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.326 qpair failed and we were unable to recover it. 00:40:07.326 [2024-11-28 13:10:37.225641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.326 [2024-11-28 13:10:37.225673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.326 qpair failed and we were unable to recover it. 
00:40:07.326 [2024-11-28 13:10:37.226005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.326 [2024-11-28 13:10:37.226033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.326 qpair failed and we were unable to recover it. 00:40:07.326 [2024-11-28 13:10:37.226256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.326 [2024-11-28 13:10:37.226285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.326 qpair failed and we were unable to recover it. 00:40:07.326 [2024-11-28 13:10:37.226637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.326 [2024-11-28 13:10:37.226664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.326 qpair failed and we were unable to recover it. 00:40:07.326 [2024-11-28 13:10:37.226890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.326 [2024-11-28 13:10:37.226917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.326 qpair failed and we were unable to recover it. 00:40:07.326 [2024-11-28 13:10:37.227277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.326 [2024-11-28 13:10:37.227307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.326 qpair failed and we were unable to recover it. 
00:40:07.326 [2024-11-28 13:10:37.227647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.326 [2024-11-28 13:10:37.227674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.326 qpair failed and we were unable to recover it. 00:40:07.326 [2024-11-28 13:10:37.228018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.326 [2024-11-28 13:10:37.228045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.326 qpair failed and we were unable to recover it. 00:40:07.326 [2024-11-28 13:10:37.228382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.326 [2024-11-28 13:10:37.228412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.326 qpair failed and we were unable to recover it. 00:40:07.326 [2024-11-28 13:10:37.228740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.326 [2024-11-28 13:10:37.228768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.326 qpair failed and we were unable to recover it. 00:40:07.326 [2024-11-28 13:10:37.229115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.326 [2024-11-28 13:10:37.229142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.326 qpair failed and we were unable to recover it. 
00:40:07.326 [2024-11-28 13:10:37.229350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.326 [2024-11-28 13:10:37.229378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.326 qpair failed and we were unable to recover it. 00:40:07.326 [2024-11-28 13:10:37.229613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.326 [2024-11-28 13:10:37.229640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.326 qpair failed and we were unable to recover it. 00:40:07.326 [2024-11-28 13:10:37.230035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.326 [2024-11-28 13:10:37.230062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.326 qpair failed and we were unable to recover it. 00:40:07.326 [2024-11-28 13:10:37.230324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.230354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 00:40:07.327 [2024-11-28 13:10:37.230701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.230730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 
00:40:07.327 [2024-11-28 13:10:37.231079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.231106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 00:40:07.327 [2024-11-28 13:10:37.231353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.231382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 00:40:07.327 [2024-11-28 13:10:37.231727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.231756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 00:40:07.327 [2024-11-28 13:10:37.231981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.232009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 00:40:07.327 [2024-11-28 13:10:37.232328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.232357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 
00:40:07.327 [2024-11-28 13:10:37.232717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.232744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 00:40:07.327 [2024-11-28 13:10:37.233097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.233125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 00:40:07.327 [2024-11-28 13:10:37.233359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.233389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 00:40:07.327 [2024-11-28 13:10:37.233732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.233760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 00:40:07.327 [2024-11-28 13:10:37.234119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.234147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 
00:40:07.327 [2024-11-28 13:10:37.234501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.234530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 00:40:07.327 [2024-11-28 13:10:37.234895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.234929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 00:40:07.327 [2024-11-28 13:10:37.235286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.235316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 00:40:07.327 [2024-11-28 13:10:37.235538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.235566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 00:40:07.327 [2024-11-28 13:10:37.235936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.235964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 
00:40:07.327 [2024-11-28 13:10:37.236303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.236332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 00:40:07.327 [2024-11-28 13:10:37.236702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.236730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 00:40:07.327 [2024-11-28 13:10:37.236958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.236986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 00:40:07.327 [2024-11-28 13:10:37.237322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.237350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 00:40:07.327 [2024-11-28 13:10:37.237709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.237736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 
00:40:07.327 [2024-11-28 13:10:37.237990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.238017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 00:40:07.327 [2024-11-28 13:10:37.238380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.238410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 00:40:07.327 [2024-11-28 13:10:37.238665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.238694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 00:40:07.327 [2024-11-28 13:10:37.239036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.239064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 00:40:07.327 [2024-11-28 13:10:37.239425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.239455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 
00:40:07.327 [2024-11-28 13:10:37.239661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.239689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 00:40:07.327 [2024-11-28 13:10:37.240073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.327 [2024-11-28 13:10:37.240102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.327 qpair failed and we were unable to recover it. 00:40:07.327 [2024-11-28 13:10:37.240433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.328 [2024-11-28 13:10:37.240463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.328 qpair failed and we were unable to recover it. 00:40:07.328 [2024-11-28 13:10:37.240728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.328 [2024-11-28 13:10:37.240756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.328 qpair failed and we were unable to recover it. 00:40:07.328 [2024-11-28 13:10:37.241080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.328 [2024-11-28 13:10:37.241108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.328 qpair failed and we were unable to recover it. 
00:40:07.328 [2024-11-28 13:10:37.241490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.328 [2024-11-28 13:10:37.241521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.328 qpair failed and we were unable to recover it. 00:40:07.328 [2024-11-28 13:10:37.241738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.328 [2024-11-28 13:10:37.241766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.328 qpair failed and we were unable to recover it. 00:40:07.328 [2024-11-28 13:10:37.242115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.328 [2024-11-28 13:10:37.242143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.328 qpair failed and we were unable to recover it. 00:40:07.328 [2024-11-28 13:10:37.242377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.328 [2024-11-28 13:10:37.242406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.328 qpair failed and we were unable to recover it. 00:40:07.328 [2024-11-28 13:10:37.242744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.328 [2024-11-28 13:10:37.242771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.328 qpair failed and we were unable to recover it. 
00:40:07.328 [2024-11-28 13:10:37.243134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.328 [2024-11-28 13:10:37.243169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.328 qpair failed and we were unable to recover it. 00:40:07.328 [2024-11-28 13:10:37.243512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.328 [2024-11-28 13:10:37.243539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.328 qpair failed and we were unable to recover it. 00:40:07.328 [2024-11-28 13:10:37.243894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.328 [2024-11-28 13:10:37.243921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.328 qpair failed and we were unable to recover it. 00:40:07.328 [2024-11-28 13:10:37.244135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.328 [2024-11-28 13:10:37.244180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.328 qpair failed and we were unable to recover it. 00:40:07.328 [2024-11-28 13:10:37.244535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.328 [2024-11-28 13:10:37.244564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.328 qpair failed and we were unable to recover it. 
00:40:07.328 [2024-11-28 13:10:37.244924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.328 [2024-11-28 13:10:37.244953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.328 qpair failed and we were unable to recover it. 00:40:07.328 [2024-11-28 13:10:37.245291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.328 [2024-11-28 13:10:37.245320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.328 qpair failed and we were unable to recover it. 00:40:07.328 [2024-11-28 13:10:37.245679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.328 [2024-11-28 13:10:37.245707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.328 qpair failed and we were unable to recover it. 00:40:07.328 [2024-11-28 13:10:37.245980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.328 [2024-11-28 13:10:37.246007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.328 qpair failed and we were unable to recover it. 00:40:07.328 [2024-11-28 13:10:37.246393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.328 [2024-11-28 13:10:37.246421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.328 qpair failed and we were unable to recover it. 
00:40:07.328 [2024-11-28 13:10:37.246768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.328 [2024-11-28 13:10:37.246796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.328 qpair failed and we were unable to recover it. 00:40:07.328 [2024-11-28 13:10:37.247154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.328 [2024-11-28 13:10:37.247191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.328 qpair failed and we were unable to recover it. 00:40:07.328 [2024-11-28 13:10:37.247525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.328 [2024-11-28 13:10:37.247553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.328 qpair failed and we were unable to recover it. 00:40:07.328 [2024-11-28 13:10:37.247912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.328 [2024-11-28 13:10:37.247939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.328 qpair failed and we were unable to recover it. 00:40:07.328 [2024-11-28 13:10:37.248124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.328 [2024-11-28 13:10:37.248152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.328 qpair failed and we were unable to recover it. 
00:40:07.328 [2024-11-28 13:10:37.248509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.328 [2024-11-28 13:10:37.248537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420
00:40:07.328 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111, ECONNREFUSED) and qpair recovery failure for tqpair=0x7fa5fc000b90 (addr=10.0.0.2, port=4420) repeat continuously from 13:10:37.248509 through 13:10:37.289153 ...]
00:40:07.332 [2024-11-28 13:10:37.289506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.289534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 00:40:07.333 [2024-11-28 13:10:37.289897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.289924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 00:40:07.333 [2024-11-28 13:10:37.290154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.290192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 00:40:07.333 [2024-11-28 13:10:37.290553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.290581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 00:40:07.333 [2024-11-28 13:10:37.290675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.290701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 
00:40:07.333 [2024-11-28 13:10:37.291197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.291297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 00:40:07.333 [2024-11-28 13:10:37.291705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.291753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 00:40:07.333 [2024-11-28 13:10:37.292128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.292178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 00:40:07.333 [2024-11-28 13:10:37.292590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.292686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 00:40:07.333 [2024-11-28 13:10:37.293112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.293149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 
00:40:07.333 [2024-11-28 13:10:37.293593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.293692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 00:40:07.333 [2024-11-28 13:10:37.294028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.294061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 00:40:07.333 [2024-11-28 13:10:37.294425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.294454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 00:40:07.333 [2024-11-28 13:10:37.294655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.294684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 00:40:07.333 [2024-11-28 13:10:37.295046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.295075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 
00:40:07.333 [2024-11-28 13:10:37.295416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.295446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 00:40:07.333 [2024-11-28 13:10:37.295779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.295807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 00:40:07.333 [2024-11-28 13:10:37.296186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.296215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 00:40:07.333 [2024-11-28 13:10:37.296448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.296477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 00:40:07.333 [2024-11-28 13:10:37.296811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.296839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 
00:40:07.333 [2024-11-28 13:10:37.297222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.297252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 00:40:07.333 [2024-11-28 13:10:37.297496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.297524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 00:40:07.333 [2024-11-28 13:10:37.297859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.297888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 00:40:07.333 [2024-11-28 13:10:37.298260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.298289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 00:40:07.333 [2024-11-28 13:10:37.298629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.298657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 
00:40:07.333 [2024-11-28 13:10:37.298895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.298924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 00:40:07.333 [2024-11-28 13:10:37.299291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.299319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 00:40:07.333 [2024-11-28 13:10:37.299567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.299595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.333 qpair failed and we were unable to recover it. 00:40:07.333 [2024-11-28 13:10:37.299936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.333 [2024-11-28 13:10:37.299964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 00:40:07.334 [2024-11-28 13:10:37.300338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.300367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 
00:40:07.334 [2024-11-28 13:10:37.300728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.300756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 00:40:07.334 [2024-11-28 13:10:37.301117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.301145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 00:40:07.334 [2024-11-28 13:10:37.301546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.301574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 00:40:07.334 [2024-11-28 13:10:37.301785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.301822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 00:40:07.334 [2024-11-28 13:10:37.302188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.302219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 
00:40:07.334 [2024-11-28 13:10:37.302595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.302624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 00:40:07.334 [2024-11-28 13:10:37.302969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.302997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 00:40:07.334 [2024-11-28 13:10:37.303211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.303241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 00:40:07.334 [2024-11-28 13:10:37.303465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.303494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 00:40:07.334 [2024-11-28 13:10:37.303825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.303854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 
00:40:07.334 [2024-11-28 13:10:37.304211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.304240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 00:40:07.334 [2024-11-28 13:10:37.304629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.304656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 00:40:07.334 [2024-11-28 13:10:37.305002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.305030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 00:40:07.334 [2024-11-28 13:10:37.305253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.305282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 00:40:07.334 [2024-11-28 13:10:37.305628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.305656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 
00:40:07.334 [2024-11-28 13:10:37.306000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.306028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 00:40:07.334 [2024-11-28 13:10:37.306381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.306410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 00:40:07.334 [2024-11-28 13:10:37.306681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.306710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 00:40:07.334 [2024-11-28 13:10:37.307075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.307103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 00:40:07.334 [2024-11-28 13:10:37.307466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.307495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 
00:40:07.334 [2024-11-28 13:10:37.307713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.307741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 00:40:07.334 [2024-11-28 13:10:37.308118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.308147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 00:40:07.334 [2024-11-28 13:10:37.308489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.308518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 00:40:07.334 [2024-11-28 13:10:37.308884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.308912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 00:40:07.334 [2024-11-28 13:10:37.309261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.309290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 
00:40:07.334 [2024-11-28 13:10:37.309645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.309672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 00:40:07.334 [2024-11-28 13:10:37.310036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.334 [2024-11-28 13:10:37.310064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.334 qpair failed and we were unable to recover it. 00:40:07.334 [2024-11-28 13:10:37.310399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.310428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.335 [2024-11-28 13:10:37.310771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.310799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.335 [2024-11-28 13:10:37.311195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.311225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 
00:40:07.335 [2024-11-28 13:10:37.311463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.311493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.335 [2024-11-28 13:10:37.311843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.311871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.335 [2024-11-28 13:10:37.312240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.312270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.335 [2024-11-28 13:10:37.312633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.312661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.335 [2024-11-28 13:10:37.313019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.313047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 
00:40:07.335 [2024-11-28 13:10:37.313255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.313284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.335 [2024-11-28 13:10:37.313529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.313560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.335 [2024-11-28 13:10:37.313901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.313929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.335 [2024-11-28 13:10:37.314300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.314329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.335 [2024-11-28 13:10:37.314694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.314722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 
00:40:07.335 [2024-11-28 13:10:37.314942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.314970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.335 [2024-11-28 13:10:37.315330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.315360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.335 [2024-11-28 13:10:37.315731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.315759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.335 [2024-11-28 13:10:37.315973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.316008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.335 [2024-11-28 13:10:37.316370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.316399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 
00:40:07.335 [2024-11-28 13:10:37.316667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.316695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.335 [2024-11-28 13:10:37.317089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.317117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.335 [2024-11-28 13:10:37.317485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.317514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.335 [2024-11-28 13:10:37.317868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.317896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.335 [2024-11-28 13:10:37.318258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.318288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 
00:40:07.335 [2024-11-28 13:10:37.318521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.318549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.335 [2024-11-28 13:10:37.318894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.318922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.335 [2024-11-28 13:10:37.319273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.319302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.335 [2024-11-28 13:10:37.319555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.319583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.335 [2024-11-28 13:10:37.319804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.319832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 
00:40:07.335 [2024-11-28 13:10:37.320225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.320254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.335 [2024-11-28 13:10:37.320566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.335 [2024-11-28 13:10:37.320594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.335 qpair failed and we were unable to recover it. 00:40:07.336 [2024-11-28 13:10:37.320948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.320976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 00:40:07.336 [2024-11-28 13:10:37.321258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.321287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 00:40:07.336 [2024-11-28 13:10:37.321403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.321435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa5fc000b90 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 
00:40:07.336 [2024-11-28 13:10:37.321837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.321938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 00:40:07.336 [2024-11-28 13:10:37.322337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.322377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 00:40:07.336 [2024-11-28 13:10:37.322766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.322796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 00:40:07.336 [2024-11-28 13:10:37.323108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.323136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 00:40:07.336 [2024-11-28 13:10:37.323637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.323737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 
00:40:07.336 [2024-11-28 13:10:37.324226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.324286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 00:40:07.336 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:07.336 [2024-11-28 13:10:37.324640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.324670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 00:40:07.336 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:40:07.336 [2024-11-28 13:10:37.324951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.324979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 00:40:07.336 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:07.336 [2024-11-28 13:10:37.325225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.325254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 
00:40:07.336 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:07.336 [2024-11-28 13:10:37.325479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.325509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 00:40:07.336 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:07.336 [2024-11-28 13:10:37.325864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.325892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 00:40:07.336 [2024-11-28 13:10:37.326264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.326294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 00:40:07.336 [2024-11-28 13:10:37.326613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.326640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 00:40:07.336 [2024-11-28 13:10:37.326866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.326895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 
00:40:07.336 [2024-11-28 13:10:37.327170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.327201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 00:40:07.336 [2024-11-28 13:10:37.327443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.327479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 00:40:07.336 [2024-11-28 13:10:37.327717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.327750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 00:40:07.336 [2024-11-28 13:10:37.328114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.328143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 00:40:07.336 [2024-11-28 13:10:37.328485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.328515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 
00:40:07.336 [2024-11-28 13:10:37.328861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.328888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 00:40:07.336 [2024-11-28 13:10:37.329243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.329275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 00:40:07.336 [2024-11-28 13:10:37.329662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.329691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 00:40:07.336 [2024-11-28 13:10:37.330041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.330070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 00:40:07.336 [2024-11-28 13:10:37.330278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.330309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 
00:40:07.336 [2024-11-28 13:10:37.330421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.330449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 00:40:07.336 [2024-11-28 13:10:37.330667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.336 [2024-11-28 13:10:37.330697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.336 qpair failed and we were unable to recover it. 00:40:07.336 [2024-11-28 13:10:37.331100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.331128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.331411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.331443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.331802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.331830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 
00:40:07.337 [2024-11-28 13:10:37.332193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.332223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.332480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.332509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.332876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.332904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.333257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.333286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.333620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.333649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 
00:40:07.337 [2024-11-28 13:10:37.333990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.334018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.334363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.334402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.334727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.334756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.335086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.335113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.335535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.335564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 
00:40:07.337 [2024-11-28 13:10:37.335895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.335923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.336296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.336325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.336712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.336741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.337098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.337128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.337479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.337509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 
00:40:07.337 [2024-11-28 13:10:37.337863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.337891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.338240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.338273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.338513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.338543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.338885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.338914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.339119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.339148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 
00:40:07.337 [2024-11-28 13:10:37.339523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.339555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.339920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.339948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.340217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.340247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.340471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.340500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.340703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.340731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 
00:40:07.337 [2024-11-28 13:10:37.341085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.341114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.341544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.337 [2024-11-28 13:10:37.341575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.337 qpair failed and we were unable to recover it. 00:40:07.337 [2024-11-28 13:10:37.341939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.341970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 00:40:07.338 [2024-11-28 13:10:37.342329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.342359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 00:40:07.338 [2024-11-28 13:10:37.342452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.342480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6de090 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 
00:40:07.338 [2024-11-28 13:10:37.342939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.343031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 00:40:07.338 [2024-11-28 13:10:37.343518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.343619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 00:40:07.338 [2024-11-28 13:10:37.344056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.344093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 00:40:07.338 [2024-11-28 13:10:37.344585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.344698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 00:40:07.338 [2024-11-28 13:10:37.345086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.345124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 
00:40:07.338 [2024-11-28 13:10:37.345527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.345559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 00:40:07.338 [2024-11-28 13:10:37.345919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.345948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 00:40:07.338 [2024-11-28 13:10:37.346394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.346494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 00:40:07.338 [2024-11-28 13:10:37.346927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.346963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 00:40:07.338 [2024-11-28 13:10:37.347318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.347351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 
00:40:07.338 [2024-11-28 13:10:37.347728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.347757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 00:40:07.338 [2024-11-28 13:10:37.348117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.348151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 00:40:07.338 [2024-11-28 13:10:37.348535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.348565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 00:40:07.338 [2024-11-28 13:10:37.348944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.348973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 00:40:07.338 [2024-11-28 13:10:37.349322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.349353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 
00:40:07.338 [2024-11-28 13:10:37.349722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.349750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 00:40:07.338 [2024-11-28 13:10:37.350172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.350201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 00:40:07.338 [2024-11-28 13:10:37.350459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.350493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 00:40:07.338 [2024-11-28 13:10:37.350852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.350881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 00:40:07.338 [2024-11-28 13:10:37.351198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.351229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 
00:40:07.338 [2024-11-28 13:10:37.351586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.338 [2024-11-28 13:10:37.351616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.338 qpair failed and we were unable to recover it. 00:40:07.338 [2024-11-28 13:10:37.351869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.351897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 00:40:07.339 [2024-11-28 13:10:37.352248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.352279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 00:40:07.339 [2024-11-28 13:10:37.352536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.352565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 00:40:07.339 [2024-11-28 13:10:37.352906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.352935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 
00:40:07.339 [2024-11-28 13:10:37.353302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.353331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 00:40:07.339 [2024-11-28 13:10:37.353705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.353735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 00:40:07.339 [2024-11-28 13:10:37.354066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.354095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 00:40:07.339 [2024-11-28 13:10:37.354458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.354488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 00:40:07.339 [2024-11-28 13:10:37.354823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.354850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 
00:40:07.339 [2024-11-28 13:10:37.355218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.355250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 00:40:07.339 [2024-11-28 13:10:37.355636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.355664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 00:40:07.339 [2024-11-28 13:10:37.355894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.355923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 00:40:07.339 [2024-11-28 13:10:37.356209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.356239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 00:40:07.339 [2024-11-28 13:10:37.356617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.356646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 
00:40:07.339 [2024-11-28 13:10:37.356909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.356937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 00:40:07.339 [2024-11-28 13:10:37.357309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.357339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 00:40:07.339 [2024-11-28 13:10:37.357700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.357729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 00:40:07.339 [2024-11-28 13:10:37.358174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.358203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 00:40:07.339 [2024-11-28 13:10:37.358579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.358608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 
00:40:07.339 [2024-11-28 13:10:37.358995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.359024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 00:40:07.339 [2024-11-28 13:10:37.359362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.359391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 00:40:07.339 [2024-11-28 13:10:37.359642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.359671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 00:40:07.339 [2024-11-28 13:10:37.360066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.360102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 00:40:07.339 [2024-11-28 13:10:37.360208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.360237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 
00:40:07.339 [2024-11-28 13:10:37.360492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.360525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 00:40:07.339 [2024-11-28 13:10:37.360735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.360763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 00:40:07.339 [2024-11-28 13:10:37.361014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.361041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 00:40:07.339 [2024-11-28 13:10:37.361417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.339 [2024-11-28 13:10:37.361448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.339 qpair failed and we were unable to recover it. 00:40:07.339 [2024-11-28 13:10:37.361797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.361827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 
00:40:07.340 [2024-11-28 13:10:37.362204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.362235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 00:40:07.340 [2024-11-28 13:10:37.362476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.362504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 00:40:07.340 [2024-11-28 13:10:37.362872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.362900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 00:40:07.340 [2024-11-28 13:10:37.363144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.363183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 00:40:07.340 [2024-11-28 13:10:37.363412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.363441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 
00:40:07.340 [2024-11-28 13:10:37.363838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.363866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 00:40:07.340 [2024-11-28 13:10:37.364229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.364260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 00:40:07.340 [2024-11-28 13:10:37.364650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.364679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 00:40:07.340 [2024-11-28 13:10:37.365040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.365068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 00:40:07.340 [2024-11-28 13:10:37.365409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.365439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 
00:40:07.340 [2024-11-28 13:10:37.365817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.365846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 00:40:07.340 [2024-11-28 13:10:37.366201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.366229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 00:40:07.340 [2024-11-28 13:10:37.366399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.366432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 00:40:07.340 [2024-11-28 13:10:37.366776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.366804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 00:40:07.340 [2024-11-28 13:10:37.367172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.367202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 
00:40:07.340 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:07.340 [2024-11-28 13:10:37.367523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.367554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 00:40:07.340 [2024-11-28 13:10:37.367865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.367894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:07.340 qpair failed and we were unable to recover it. 00:40:07.340 [2024-11-28 13:10:37.368103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.368131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 
00:40:07.340 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.340 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:07.340 [2024-11-28 13:10:37.368562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.368595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 00:40:07.340 [2024-11-28 13:10:37.368925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.368953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 00:40:07.340 [2024-11-28 13:10:37.369306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.369336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 00:40:07.340 [2024-11-28 13:10:37.369697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.369725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 
00:40:07.340 [2024-11-28 13:10:37.370104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.370131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 00:40:07.340 [2024-11-28 13:10:37.370479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.370508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 00:40:07.340 [2024-11-28 13:10:37.370859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.370887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 00:40:07.340 [2024-11-28 13:10:37.370988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.340 [2024-11-28 13:10:37.371015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.340 qpair failed and we were unable to recover it. 00:40:07.340 [2024-11-28 13:10:37.371426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.371457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 
00:40:07.341 [2024-11-28 13:10:37.371803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.371830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.372143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.372180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.372523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.372552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.372777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.372805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.373156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.373203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 
00:40:07.341 [2024-11-28 13:10:37.373586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.373613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.373985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.374013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.374382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.374412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.374773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.374801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.375179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.375208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 
00:40:07.341 [2024-11-28 13:10:37.375572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.375600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.375935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.375964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.376343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.376373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.376607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.376639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.377009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.377037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 
00:40:07.341 [2024-11-28 13:10:37.377396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.377425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.377679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.377707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.377939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.377968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.378345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.378375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.378739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.378767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 
00:40:07.341 [2024-11-28 13:10:37.379136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.379171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.379532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.379561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.379931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.379959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.380303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.380332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.380774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.380801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 
00:40:07.341 [2024-11-28 13:10:37.381171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.381200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.381556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.381584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.381947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.381975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.382315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.382344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.341 qpair failed and we were unable to recover it. 00:40:07.341 [2024-11-28 13:10:37.382720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.341 [2024-11-28 13:10:37.382747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 
00:40:07.342 [2024-11-28 13:10:37.383128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.383156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 00:40:07.342 [2024-11-28 13:10:37.383552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.383582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 00:40:07.342 [2024-11-28 13:10:37.383948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.383977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 00:40:07.342 [2024-11-28 13:10:37.384220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.384250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 00:40:07.342 [2024-11-28 13:10:37.384630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.384658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 
00:40:07.342 [2024-11-28 13:10:37.385027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.385055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 00:40:07.342 [2024-11-28 13:10:37.385410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.385440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 00:40:07.342 [2024-11-28 13:10:37.385661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.385689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 00:40:07.342 [2024-11-28 13:10:37.386045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.386073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 00:40:07.342 [2024-11-28 13:10:37.386418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.386446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 
00:40:07.342 [2024-11-28 13:10:37.386844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.386872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 00:40:07.342 [2024-11-28 13:10:37.387226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.387256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 00:40:07.342 [2024-11-28 13:10:37.387553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.387581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 00:40:07.342 [2024-11-28 13:10:37.387908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.387936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 00:40:07.342 [2024-11-28 13:10:37.388300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.388336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 
00:40:07.342 [2024-11-28 13:10:37.388700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.388729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 00:40:07.342 [2024-11-28 13:10:37.389097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.389125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 00:40:07.342 [2024-11-28 13:10:37.389496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.389526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 00:40:07.342 [2024-11-28 13:10:37.389902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.389929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 00:40:07.342 [2024-11-28 13:10:37.390295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.390324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 
00:40:07.342 [2024-11-28 13:10:37.390673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.390702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 00:40:07.342 [2024-11-28 13:10:37.390915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.390944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 00:40:07.342 [2024-11-28 13:10:37.391304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.391334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 00:40:07.342 [2024-11-28 13:10:37.391702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.391730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 00:40:07.342 [2024-11-28 13:10:37.392100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.392128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 
00:40:07.342 [2024-11-28 13:10:37.392405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.392435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 00:40:07.342 [2024-11-28 13:10:37.392796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.392823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.342 qpair failed and we were unable to recover it. 00:40:07.342 [2024-11-28 13:10:37.393059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.342 [2024-11-28 13:10:37.393087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 00:40:07.343 [2024-11-28 13:10:37.393457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.393487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 00:40:07.343 [2024-11-28 13:10:37.393748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.393776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 
00:40:07.343 [2024-11-28 13:10:37.394154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.394191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 00:40:07.343 [2024-11-28 13:10:37.394555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.394583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 00:40:07.343 [2024-11-28 13:10:37.394954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.394981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 00:40:07.343 [2024-11-28 13:10:37.395212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.395245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 00:40:07.343 [2024-11-28 13:10:37.395600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.395628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 
00:40:07.343 [2024-11-28 13:10:37.395972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.396000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 00:40:07.343 [2024-11-28 13:10:37.396254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.396283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 00:40:07.343 [2024-11-28 13:10:37.396671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.396698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 00:40:07.343 [2024-11-28 13:10:37.396954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.396986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 00:40:07.343 Malloc0 00:40:07.343 [2024-11-28 13:10:37.397330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.397361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 
00:40:07.343 [2024-11-28 13:10:37.397739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.397768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 00:40:07.343 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.343 [2024-11-28 13:10:37.398129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.398168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 00:40:07.343 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:40:07.343 [2024-11-28 13:10:37.398529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.398557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 00:40:07.343 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.343 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:07.343 [2024-11-28 13:10:37.398915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.398942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 
00:40:07.343 [2024-11-28 13:10:37.399303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.399333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 00:40:07.343 [2024-11-28 13:10:37.399704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.399731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 00:40:07.343 [2024-11-28 13:10:37.400004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.400032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 00:40:07.343 [2024-11-28 13:10:37.400385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.400414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 00:40:07.343 [2024-11-28 13:10:37.400680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.400707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 
00:40:07.343 [2024-11-28 13:10:37.401070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.401098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 00:40:07.343 [2024-11-28 13:10:37.401336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.401365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 00:40:07.343 [2024-11-28 13:10:37.401662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.401689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 00:40:07.343 [2024-11-28 13:10:37.402067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.402094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 00:40:07.343 [2024-11-28 13:10:37.402449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.402479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 
00:40:07.343 [2024-11-28 13:10:37.402878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.402905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 00:40:07.343 [2024-11-28 13:10:37.403284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.343 [2024-11-28 13:10:37.403314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.343 qpair failed and we were unable to recover it. 00:40:07.343 [2024-11-28 13:10:37.403675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.403702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 00:40:07.344 [2024-11-28 13:10:37.404081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.404108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 00:40:07.344 [2024-11-28 13:10:37.404369] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:07.344 [2024-11-28 13:10:37.404522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.404551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 
00:40:07.344 [2024-11-28 13:10:37.404907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.404935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 00:40:07.344 [2024-11-28 13:10:37.405191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.405220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 00:40:07.344 [2024-11-28 13:10:37.405605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.405632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 00:40:07.344 [2024-11-28 13:10:37.405866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.405895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 00:40:07.344 [2024-11-28 13:10:37.406276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.406305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 
00:40:07.344 [2024-11-28 13:10:37.406669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.406696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 00:40:07.344 [2024-11-28 13:10:37.407067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.407094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 00:40:07.344 [2024-11-28 13:10:37.407451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.407481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 00:40:07.344 [2024-11-28 13:10:37.407582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.407609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 00:40:07.344 [2024-11-28 13:10:37.407875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.407902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 
00:40:07.344 [2024-11-28 13:10:37.408273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.408302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 00:40:07.344 [2024-11-28 13:10:37.408540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.408567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 00:40:07.344 [2024-11-28 13:10:37.408945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.408974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 00:40:07.344 [2024-11-28 13:10:37.409353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.409382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 00:40:07.344 [2024-11-28 13:10:37.409570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.409602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 
00:40:07.344 [2024-11-28 13:10:37.409844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.409872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 00:40:07.344 [2024-11-28 13:10:37.410236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.410265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 00:40:07.344 [2024-11-28 13:10:37.410644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.410671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 00:40:07.344 [2024-11-28 13:10:37.411049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.411076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 00:40:07.344 [2024-11-28 13:10:37.411448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.411477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 
00:40:07.344 [2024-11-28 13:10:37.411789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.411820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 00:40:07.344 [2024-11-28 13:10:37.412208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.412239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 00:40:07.344 [2024-11-28 13:10:37.412605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.412634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 00:40:07.344 [2024-11-28 13:10:37.412778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.412807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 00:40:07.344 [2024-11-28 13:10:37.413189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.344 [2024-11-28 13:10:37.413218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.344 qpair failed and we were unable to recover it. 
00:40:07.344 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.344 [2024-11-28 13:10:37.413541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.345 [2024-11-28 13:10:37.413572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.345 qpair failed and we were unable to recover it. 00:40:07.345 [2024-11-28 13:10:37.413791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.345 [2024-11-28 13:10:37.413820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.345 qpair failed and we were unable to recover it. 00:40:07.345 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:07.345 [2024-11-28 13:10:37.413985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.345 [2024-11-28 13:10:37.414011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.345 qpair failed and we were unable to recover it. 00:40:07.345 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.345 [2024-11-28 13:10:37.414263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.345 [2024-11-28 13:10:37.414292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.345 qpair failed and we were unable to recover it. 
00:40:07.345 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:07.345 [2024-11-28 13:10:37.414623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.345 [2024-11-28 13:10:37.414652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.345 qpair failed and we were unable to recover it. 00:40:07.345 [2024-11-28 13:10:37.414926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.345 [2024-11-28 13:10:37.414953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.345 qpair failed and we were unable to recover it. 00:40:07.345 [2024-11-28 13:10:37.415319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.345 [2024-11-28 13:10:37.415348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.345 qpair failed and we were unable to recover it. 00:40:07.345 [2024-11-28 13:10:37.415606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.345 [2024-11-28 13:10:37.415638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.345 qpair failed and we were unable to recover it. 00:40:07.345 [2024-11-28 13:10:37.415991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.345 [2024-11-28 13:10:37.416019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.345 qpair failed and we were unable to recover it. 
00:40:07.345 [2024-11-28 13:10:37.416270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.345 [2024-11-28 13:10:37.416300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.345 qpair failed and we were unable to recover it. 00:40:07.345 [2024-11-28 13:10:37.416678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.345 [2024-11-28 13:10:37.416706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.345 qpair failed and we were unable to recover it. 00:40:07.345 [2024-11-28 13:10:37.417055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.345 [2024-11-28 13:10:37.417083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.345 qpair failed and we were unable to recover it. 00:40:07.345 [2024-11-28 13:10:37.417475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.345 [2024-11-28 13:10:37.417504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.345 qpair failed and we were unable to recover it. 00:40:07.345 [2024-11-28 13:10:37.417737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.345 [2024-11-28 13:10:37.417764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.345 qpair failed and we were unable to recover it. 
00:40:07.345 [2024-11-28 13:10:37.418137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.345 [2024-11-28 13:10:37.418173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.345 qpair failed and we were unable to recover it. 00:40:07.345 [2024-11-28 13:10:37.418403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.345 [2024-11-28 13:10:37.418430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.345 qpair failed and we were unable to recover it. 00:40:07.345 [2024-11-28 13:10:37.418789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.345 [2024-11-28 13:10:37.418815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.345 qpair failed and we were unable to recover it. 00:40:07.345 [2024-11-28 13:10:37.419192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.345 [2024-11-28 13:10:37.419221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.345 qpair failed and we were unable to recover it. 00:40:07.345 [2024-11-28 13:10:37.419566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.345 [2024-11-28 13:10:37.419593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.345 qpair failed and we were unable to recover it. 
00:40:07.345 [2024-11-28 13:10:37.419958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.345 [2024-11-28 13:10:37.419986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.610 qpair failed and we were unable to recover it. 00:40:07.610 [2024-11-28 13:10:37.420343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.610 [2024-11-28 13:10:37.420374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.610 qpair failed and we were unable to recover it. 00:40:07.610 [2024-11-28 13:10:37.420742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.610 [2024-11-28 13:10:37.420770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.610 qpair failed and we were unable to recover it. 00:40:07.610 [2024-11-28 13:10:37.421144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.610 [2024-11-28 13:10:37.421179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.610 qpair failed and we were unable to recover it. 00:40:07.610 [2024-11-28 13:10:37.421429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.610 [2024-11-28 13:10:37.421457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420 00:40:07.610 qpair failed and we were unable to recover it. 
00:40:07.610 [2024-11-28 13:10:37.421811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.610 [2024-11-28 13:10:37.421838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.610 qpair failed and we were unable to recover it.
00:40:07.610 [2024-11-28 13:10:37.422203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.610 [2024-11-28 13:10:37.422232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.610 qpair failed and we were unable to recover it.
00:40:07.610 [2024-11-28 13:10:37.422620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.610 [2024-11-28 13:10:37.422647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.610 qpair failed and we were unable to recover it.
00:40:07.610 [2024-11-28 13:10:37.423024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.423054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 [2024-11-28 13:10:37.423422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.423451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 [2024-11-28 13:10:37.423827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.423855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 [2024-11-28 13:10:37.424117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.424145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 [2024-11-28 13:10:37.424568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.424597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 [2024-11-28 13:10:37.424862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.424890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 [2024-11-28 13:10:37.425119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.425153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:07.611 [2024-11-28 13:10:37.425545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.425574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:40:07.611 [2024-11-28 13:10:37.425949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.425977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:07.611 [2024-11-28 13:10:37.426332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.426361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:40:07.611 [2024-11-28 13:10:37.426625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.426652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 [2024-11-28 13:10:37.426961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.426988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 [2024-11-28 13:10:37.427363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.427393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 [2024-11-28 13:10:37.427764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.427793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 [2024-11-28 13:10:37.428018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.428046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 [2024-11-28 13:10:37.428190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.428220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 [2024-11-28 13:10:37.428564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.428591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 [2024-11-28 13:10:37.428811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.428838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 [2024-11-28 13:10:37.429204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.429233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 [2024-11-28 13:10:37.429614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.429641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 [2024-11-28 13:10:37.429796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.429823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 [2024-11-28 13:10:37.430246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.430276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 [2024-11-28 13:10:37.430518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.430546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 [2024-11-28 13:10:37.430903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.430931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 [2024-11-28 13:10:37.431269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.431298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 [2024-11-28 13:10:37.431576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.431602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.611 qpair failed and we were unable to recover it.
00:40:07.611 [2024-11-28 13:10:37.431960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.611 [2024-11-28 13:10:37.431987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.432318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.432347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.432557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.432586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.432972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.432999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.433342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.433371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.433476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.433505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa600000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.433871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.433977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.434432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.434536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.434950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.434985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.435440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.435533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.435923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.435959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.436409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.436503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.436992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.437027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.437418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.437449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.612 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.437721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.437754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:40:07.612 [2024-11-28 13:10:37.438107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.438136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:07.612 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:40:07.612 [2024-11-28 13:10:37.438506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.438557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.438682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.438709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.438979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.439011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.439416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.439447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.439654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.439682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.440046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.440074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.440406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.440435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.440792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.440819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.441184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.441235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.441619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.441647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.441910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.441937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.612 qpair failed and we were unable to recover it.
00:40:07.612 [2024-11-28 13:10:37.442145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.612 [2024-11-28 13:10:37.442182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.613 qpair failed and we were unable to recover it.
00:40:07.613 [2024-11-28 13:10:37.442547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.613 [2024-11-28 13:10:37.442576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.613 qpair failed and we were unable to recover it.
00:40:07.613 [2024-11-28 13:10:37.442949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.613 [2024-11-28 13:10:37.442978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.613 qpair failed and we were unable to recover it.
00:40:07.613 [2024-11-28 13:10:37.443243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.613 [2024-11-28 13:10:37.443273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.613 qpair failed and we were unable to recover it.
00:40:07.613 [2024-11-28 13:10:37.443515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.613 [2024-11-28 13:10:37.443543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.613 qpair failed and we were unable to recover it.
00:40:07.613 [2024-11-28 13:10:37.443912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.613 [2024-11-28 13:10:37.443940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.613 qpair failed and we were unable to recover it.
00:40:07.613 [2024-11-28 13:10:37.444201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.613 [2024-11-28 13:10:37.444231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa608000b90 with addr=10.0.0.2, port=4420
00:40:07.613 qpair failed and we were unable to recover it.
00:40:07.613 [2024-11-28 13:10:37.444557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:40:07.613 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:07.613 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:40:07.613 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:07.613 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:40:07.613 [2024-11-28 13:10:37.455273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.613 [2024-11-28 13:10:37.455384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.613 [2024-11-28 13:10:37.455423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.613 [2024-11-28 13:10:37.455442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.613 [2024-11-28 13:10:37.455460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:07.613 [2024-11-28 13:10:37.455507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:07.613 qpair failed and we were unable to recover it.
00:40:07.613 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:07.613 13:10:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3680616
00:40:07.613 [2024-11-28 13:10:37.465165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.613 [2024-11-28 13:10:37.465240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.613 [2024-11-28 13:10:37.465266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.613 [2024-11-28 13:10:37.465279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.613 [2024-11-28 13:10:37.465294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:07.613 [2024-11-28 13:10:37.465324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:07.613 qpair failed and we were unable to recover it.
00:40:07.613 [2024-11-28 13:10:37.474995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.613 [2024-11-28 13:10:37.475061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.613 [2024-11-28 13:10:37.475080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.613 [2024-11-28 13:10:37.475089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.613 [2024-11-28 13:10:37.475097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:07.613 [2024-11-28 13:10:37.475117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:07.613 qpair failed and we were unable to recover it.
00:40:07.613 [2024-11-28 13:10:37.485120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.613 [2024-11-28 13:10:37.485191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.613 [2024-11-28 13:10:37.485206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.613 [2024-11-28 13:10:37.485213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.613 [2024-11-28 13:10:37.485221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:07.613 [2024-11-28 13:10:37.485238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:07.613 qpair failed and we were unable to recover it.
00:40:07.613 [2024-11-28 13:10:37.495059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.613 [2024-11-28 13:10:37.495123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.613 [2024-11-28 13:10:37.495137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.613 [2024-11-28 13:10:37.495143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.613 [2024-11-28 13:10:37.495149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:07.613 [2024-11-28 13:10:37.495168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:07.613 qpair failed and we were unable to recover it.
00:40:07.613 [2024-11-28 13:10:37.505017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.613 [2024-11-28 13:10:37.505074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.613 [2024-11-28 13:10:37.505087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.613 [2024-11-28 13:10:37.505094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.613 [2024-11-28 13:10:37.505100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:07.613 [2024-11-28 13:10:37.505114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:07.613 qpair failed and we were unable to recover it.
00:40:07.613 [2024-11-28 13:10:37.515031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.613 [2024-11-28 13:10:37.515084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.614 [2024-11-28 13:10:37.515101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.614 [2024-11-28 13:10:37.515108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.614 [2024-11-28 13:10:37.515114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:07.614 [2024-11-28 13:10:37.515128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:07.614 qpair failed and we were unable to recover it.
00:40:07.614 [2024-11-28 13:10:37.525030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.614 [2024-11-28 13:10:37.525097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.614 [2024-11-28 13:10:37.525110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.614 [2024-11-28 13:10:37.525117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.614 [2024-11-28 13:10:37.525123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:07.614 [2024-11-28 13:10:37.525137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:07.614 qpair failed and we were unable to recover it.
00:40:07.614 [2024-11-28 13:10:37.535053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.614 [2024-11-28 13:10:37.535123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.614 [2024-11-28 13:10:37.535137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.614 [2024-11-28 13:10:37.535143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.614 [2024-11-28 13:10:37.535149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.614 [2024-11-28 13:10:37.535170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.614 qpair failed and we were unable to recover it. 
00:40:07.614 [2024-11-28 13:10:37.545040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.614 [2024-11-28 13:10:37.545112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.614 [2024-11-28 13:10:37.545125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.614 [2024-11-28 13:10:37.545132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.614 [2024-11-28 13:10:37.545138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.614 [2024-11-28 13:10:37.545152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.614 qpair failed and we were unable to recover it. 
00:40:07.614 [2024-11-28 13:10:37.555070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.614 [2024-11-28 13:10:37.555132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.614 [2024-11-28 13:10:37.555145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.614 [2024-11-28 13:10:37.555151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.614 [2024-11-28 13:10:37.555165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.614 [2024-11-28 13:10:37.555179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.614 qpair failed and we were unable to recover it. 
00:40:07.614 [2024-11-28 13:10:37.565055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.614 [2024-11-28 13:10:37.565107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.614 [2024-11-28 13:10:37.565121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.614 [2024-11-28 13:10:37.565127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.614 [2024-11-28 13:10:37.565133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.614 [2024-11-28 13:10:37.565148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.614 qpair failed and we were unable to recover it. 
00:40:07.614 [2024-11-28 13:10:37.574962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.614 [2024-11-28 13:10:37.575020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.614 [2024-11-28 13:10:37.575035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.614 [2024-11-28 13:10:37.575042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.614 [2024-11-28 13:10:37.575048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.614 [2024-11-28 13:10:37.575063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.614 qpair failed and we were unable to recover it. 
00:40:07.614 [2024-11-28 13:10:37.585086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.614 [2024-11-28 13:10:37.585178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.614 [2024-11-28 13:10:37.585192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.614 [2024-11-28 13:10:37.585199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.614 [2024-11-28 13:10:37.585205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.614 [2024-11-28 13:10:37.585220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.614 qpair failed and we were unable to recover it. 
00:40:07.614 [2024-11-28 13:10:37.595063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.614 [2024-11-28 13:10:37.595114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.614 [2024-11-28 13:10:37.595126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.614 [2024-11-28 13:10:37.595133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.614 [2024-11-28 13:10:37.595139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.614 [2024-11-28 13:10:37.595156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.614 qpair failed and we were unable to recover it. 
00:40:07.614 [2024-11-28 13:10:37.604971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.614 [2024-11-28 13:10:37.605068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.614 [2024-11-28 13:10:37.605081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.614 [2024-11-28 13:10:37.605087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.614 [2024-11-28 13:10:37.605093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.614 [2024-11-28 13:10:37.605107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.614 qpair failed and we were unable to recover it. 
00:40:07.614 [2024-11-28 13:10:37.615070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.614 [2024-11-28 13:10:37.615128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.614 [2024-11-28 13:10:37.615141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.614 [2024-11-28 13:10:37.615148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.614 [2024-11-28 13:10:37.615154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.615 [2024-11-28 13:10:37.615173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.615 qpair failed and we were unable to recover it. 
00:40:07.615 [2024-11-28 13:10:37.625053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.615 [2024-11-28 13:10:37.625113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.615 [2024-11-28 13:10:37.625126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.615 [2024-11-28 13:10:37.625132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.615 [2024-11-28 13:10:37.625138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.615 [2024-11-28 13:10:37.625152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.615 qpair failed and we were unable to recover it. 
00:40:07.615 [2024-11-28 13:10:37.635085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.615 [2024-11-28 13:10:37.635144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.615 [2024-11-28 13:10:37.635161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.615 [2024-11-28 13:10:37.635168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.615 [2024-11-28 13:10:37.635174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.615 [2024-11-28 13:10:37.635188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.615 qpair failed and we were unable to recover it. 
00:40:07.615 [2024-11-28 13:10:37.645082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.615 [2024-11-28 13:10:37.645135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.615 [2024-11-28 13:10:37.645150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.615 [2024-11-28 13:10:37.645157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.615 [2024-11-28 13:10:37.645167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.615 [2024-11-28 13:10:37.645181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.615 qpair failed and we were unable to recover it. 
00:40:07.615 [2024-11-28 13:10:37.655118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.615 [2024-11-28 13:10:37.655178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.615 [2024-11-28 13:10:37.655191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.615 [2024-11-28 13:10:37.655197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.615 [2024-11-28 13:10:37.655204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.615 [2024-11-28 13:10:37.655218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.615 qpair failed and we were unable to recover it. 
00:40:07.615 [2024-11-28 13:10:37.665108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.615 [2024-11-28 13:10:37.665166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.615 [2024-11-28 13:10:37.665179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.615 [2024-11-28 13:10:37.665186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.615 [2024-11-28 13:10:37.665192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.615 [2024-11-28 13:10:37.665207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.615 qpair failed and we were unable to recover it. 
00:40:07.615 [2024-11-28 13:10:37.675119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.615 [2024-11-28 13:10:37.675195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.615 [2024-11-28 13:10:37.675208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.615 [2024-11-28 13:10:37.675215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.615 [2024-11-28 13:10:37.675221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.615 [2024-11-28 13:10:37.675235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.615 qpair failed and we were unable to recover it. 
00:40:07.615 [2024-11-28 13:10:37.685113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.615 [2024-11-28 13:10:37.685174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.615 [2024-11-28 13:10:37.685187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.615 [2024-11-28 13:10:37.685197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.615 [2024-11-28 13:10:37.685203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.615 [2024-11-28 13:10:37.685217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.615 qpair failed and we were unable to recover it. 
00:40:07.615 [2024-11-28 13:10:37.695137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.615 [2024-11-28 13:10:37.695199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.615 [2024-11-28 13:10:37.695212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.615 [2024-11-28 13:10:37.695219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.615 [2024-11-28 13:10:37.695225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.615 [2024-11-28 13:10:37.695239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.615 qpair failed and we were unable to recover it. 
00:40:07.615 [2024-11-28 13:10:37.705134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.615 [2024-11-28 13:10:37.705225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.615 [2024-11-28 13:10:37.705238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.615 [2024-11-28 13:10:37.705244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.615 [2024-11-28 13:10:37.705250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.615 [2024-11-28 13:10:37.705264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.615 qpair failed and we were unable to recover it. 
00:40:07.615 [2024-11-28 13:10:37.715092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.615 [2024-11-28 13:10:37.715180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.615 [2024-11-28 13:10:37.715193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.615 [2024-11-28 13:10:37.715200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.615 [2024-11-28 13:10:37.715206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.615 [2024-11-28 13:10:37.715219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.615 qpair failed and we were unable to recover it. 
00:40:07.615 [2024-11-28 13:10:37.725138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.616 [2024-11-28 13:10:37.725215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.616 [2024-11-28 13:10:37.725228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.616 [2024-11-28 13:10:37.725235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.616 [2024-11-28 13:10:37.725241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.616 [2024-11-28 13:10:37.725259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.616 qpair failed and we were unable to recover it. 
00:40:07.879 [2024-11-28 13:10:37.735144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.879 [2024-11-28 13:10:37.735206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.879 [2024-11-28 13:10:37.735220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.879 [2024-11-28 13:10:37.735227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.879 [2024-11-28 13:10:37.735233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.879 [2024-11-28 13:10:37.735247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.879 qpair failed and we were unable to recover it. 
00:40:07.879 [2024-11-28 13:10:37.745142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.879 [2024-11-28 13:10:37.745233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.879 [2024-11-28 13:10:37.745246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.879 [2024-11-28 13:10:37.745252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.879 [2024-11-28 13:10:37.745258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.879 [2024-11-28 13:10:37.745273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.879 qpair failed and we were unable to recover it. 
00:40:07.879 [2024-11-28 13:10:37.755137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.879 [2024-11-28 13:10:37.755205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.879 [2024-11-28 13:10:37.755218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.879 [2024-11-28 13:10:37.755225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.879 [2024-11-28 13:10:37.755231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.879 [2024-11-28 13:10:37.755244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.879 qpair failed and we were unable to recover it. 
00:40:07.879 [2024-11-28 13:10:37.765153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.879 [2024-11-28 13:10:37.765219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.879 [2024-11-28 13:10:37.765231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.879 [2024-11-28 13:10:37.765238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.879 [2024-11-28 13:10:37.765244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.879 [2024-11-28 13:10:37.765258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.879 qpair failed and we were unable to recover it. 
00:40:07.879 [2024-11-28 13:10:37.775137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.879 [2024-11-28 13:10:37.775207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.879 [2024-11-28 13:10:37.775220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.880 [2024-11-28 13:10:37.775227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.880 [2024-11-28 13:10:37.775233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.880 [2024-11-28 13:10:37.775247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.880 qpair failed and we were unable to recover it. 
00:40:07.880 [2024-11-28 13:10:37.785135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.880 [2024-11-28 13:10:37.785189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.880 [2024-11-28 13:10:37.785201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.880 [2024-11-28 13:10:37.785208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.880 [2024-11-28 13:10:37.785214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.880 [2024-11-28 13:10:37.785228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.880 qpair failed and we were unable to recover it. 
00:40:07.880 [2024-11-28 13:10:37.795169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.880 [2024-11-28 13:10:37.795232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.880 [2024-11-28 13:10:37.795246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.880 [2024-11-28 13:10:37.795253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.880 [2024-11-28 13:10:37.795259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.880 [2024-11-28 13:10:37.795273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.880 qpair failed and we were unable to recover it. 
00:40:07.880 [2024-11-28 13:10:37.805181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.880 [2024-11-28 13:10:37.805289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.880 [2024-11-28 13:10:37.805302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.880 [2024-11-28 13:10:37.805309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.880 [2024-11-28 13:10:37.805315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.880 [2024-11-28 13:10:37.805329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.880 qpair failed and we were unable to recover it. 
00:40:07.880 [2024-11-28 13:10:37.815179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.880 [2024-11-28 13:10:37.815233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.880 [2024-11-28 13:10:37.815246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.880 [2024-11-28 13:10:37.815255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.880 [2024-11-28 13:10:37.815262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.880 [2024-11-28 13:10:37.815276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.880 qpair failed and we were unable to recover it. 
00:40:07.880 [2024-11-28 13:10:37.825181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.880 [2024-11-28 13:10:37.825231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.880 [2024-11-28 13:10:37.825245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.880 [2024-11-28 13:10:37.825251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.880 [2024-11-28 13:10:37.825257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.880 [2024-11-28 13:10:37.825271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.880 qpair failed and we were unable to recover it. 
00:40:07.880 [2024-11-28 13:10:37.835187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.880 [2024-11-28 13:10:37.835239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.880 [2024-11-28 13:10:37.835252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.880 [2024-11-28 13:10:37.835259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.880 [2024-11-28 13:10:37.835265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.880 [2024-11-28 13:10:37.835279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.880 qpair failed and we were unable to recover it. 
00:40:07.880 [2024-11-28 13:10:37.845175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.880 [2024-11-28 13:10:37.845232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.880 [2024-11-28 13:10:37.845245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.880 [2024-11-28 13:10:37.845252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.880 [2024-11-28 13:10:37.845258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.880 [2024-11-28 13:10:37.845272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.880 qpair failed and we were unable to recover it. 
00:40:07.880 [2024-11-28 13:10:37.855196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.880 [2024-11-28 13:10:37.855253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.880 [2024-11-28 13:10:37.855266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.880 [2024-11-28 13:10:37.855272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.880 [2024-11-28 13:10:37.855279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.880 [2024-11-28 13:10:37.855296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.880 qpair failed and we were unable to recover it. 
00:40:07.880 [2024-11-28 13:10:37.865178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.880 [2024-11-28 13:10:37.865225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.880 [2024-11-28 13:10:37.865238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.880 [2024-11-28 13:10:37.865244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.880 [2024-11-28 13:10:37.865250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.880 [2024-11-28 13:10:37.865264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.880 qpair failed and we were unable to recover it. 
00:40:07.880 [2024-11-28 13:10:37.875156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.880 [2024-11-28 13:10:37.875254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.880 [2024-11-28 13:10:37.875267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.880 [2024-11-28 13:10:37.875273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.881 [2024-11-28 13:10:37.875280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.881 [2024-11-28 13:10:37.875294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.881 qpair failed and we were unable to recover it. 
00:40:07.881 [2024-11-28 13:10:37.885181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.881 [2024-11-28 13:10:37.885269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.881 [2024-11-28 13:10:37.885282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.881 [2024-11-28 13:10:37.885289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.881 [2024-11-28 13:10:37.885295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.881 [2024-11-28 13:10:37.885310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.881 qpair failed and we were unable to recover it. 
00:40:07.881 [2024-11-28 13:10:37.895204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.881 [2024-11-28 13:10:37.895262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.881 [2024-11-28 13:10:37.895275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.881 [2024-11-28 13:10:37.895281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.881 [2024-11-28 13:10:37.895287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.881 [2024-11-28 13:10:37.895301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.881 qpair failed and we were unable to recover it. 
00:40:07.881 [2024-11-28 13:10:37.905217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.881 [2024-11-28 13:10:37.905270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.881 [2024-11-28 13:10:37.905283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.881 [2024-11-28 13:10:37.905289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.881 [2024-11-28 13:10:37.905296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.881 [2024-11-28 13:10:37.905310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.881 qpair failed and we were unable to recover it. 
00:40:07.881 [2024-11-28 13:10:37.915079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.881 [2024-11-28 13:10:37.915133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.881 [2024-11-28 13:10:37.915147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.881 [2024-11-28 13:10:37.915154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.881 [2024-11-28 13:10:37.915164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.881 [2024-11-28 13:10:37.915185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.881 qpair failed and we were unable to recover it. 
00:40:07.881 [2024-11-28 13:10:37.925217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.881 [2024-11-28 13:10:37.925271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.881 [2024-11-28 13:10:37.925284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.881 [2024-11-28 13:10:37.925291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.881 [2024-11-28 13:10:37.925297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.881 [2024-11-28 13:10:37.925311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.881 qpair failed and we were unable to recover it. 
00:40:07.881 [2024-11-28 13:10:37.935228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.881 [2024-11-28 13:10:37.935282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.881 [2024-11-28 13:10:37.935294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.881 [2024-11-28 13:10:37.935301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.881 [2024-11-28 13:10:37.935307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.881 [2024-11-28 13:10:37.935321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.881 qpair failed and we were unable to recover it. 
00:40:07.881 [2024-11-28 13:10:37.945185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.881 [2024-11-28 13:10:37.945242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.881 [2024-11-28 13:10:37.945258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.881 [2024-11-28 13:10:37.945264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.881 [2024-11-28 13:10:37.945270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.881 [2024-11-28 13:10:37.945285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.881 qpair failed and we were unable to recover it. 
00:40:07.881 [2024-11-28 13:10:37.955213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.881 [2024-11-28 13:10:37.955278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.881 [2024-11-28 13:10:37.955291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.881 [2024-11-28 13:10:37.955297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.881 [2024-11-28 13:10:37.955303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.881 [2024-11-28 13:10:37.955317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.881 qpair failed and we were unable to recover it. 
00:40:07.881 [2024-11-28 13:10:37.965238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.881 [2024-11-28 13:10:37.965293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.881 [2024-11-28 13:10:37.965306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.881 [2024-11-28 13:10:37.965312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.881 [2024-11-28 13:10:37.965319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.881 [2024-11-28 13:10:37.965333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.881 qpair failed and we were unable to recover it. 
00:40:07.881 [2024-11-28 13:10:37.975213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.881 [2024-11-28 13:10:37.975270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.881 [2024-11-28 13:10:37.975283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.882 [2024-11-28 13:10:37.975289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.882 [2024-11-28 13:10:37.975295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.882 [2024-11-28 13:10:37.975309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.882 qpair failed and we were unable to recover it. 
00:40:07.882 [2024-11-28 13:10:37.985236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.882 [2024-11-28 13:10:37.985289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.882 [2024-11-28 13:10:37.985303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.882 [2024-11-28 13:10:37.985309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.882 [2024-11-28 13:10:37.985319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.882 [2024-11-28 13:10:37.985339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.882 qpair failed and we were unable to recover it. 
00:40:07.882 [2024-11-28 13:10:37.995236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.882 [2024-11-28 13:10:37.995333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.882 [2024-11-28 13:10:37.995347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.882 [2024-11-28 13:10:37.995353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.882 [2024-11-28 13:10:37.995359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:07.882 [2024-11-28 13:10:37.995373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:07.882 qpair failed and we were unable to recover it. 
00:40:08.145 [2024-11-28 13:10:38.005244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.145 [2024-11-28 13:10:38.005302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.145 [2024-11-28 13:10:38.005315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.145 [2024-11-28 13:10:38.005321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.145 [2024-11-28 13:10:38.005328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.145 [2024-11-28 13:10:38.005342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.145 qpair failed and we were unable to recover it. 
00:40:08.145 [2024-11-28 13:10:38.015271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.145 [2024-11-28 13:10:38.015328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.145 [2024-11-28 13:10:38.015340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.145 [2024-11-28 13:10:38.015347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.145 [2024-11-28 13:10:38.015353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.145 [2024-11-28 13:10:38.015367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.145 qpair failed and we were unable to recover it. 
00:40:08.145 [2024-11-28 13:10:38.025269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.145 [2024-11-28 13:10:38.025326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.145 [2024-11-28 13:10:38.025338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.145 [2024-11-28 13:10:38.025345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.145 [2024-11-28 13:10:38.025351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.145 [2024-11-28 13:10:38.025365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.145 qpair failed and we were unable to recover it. 
00:40:08.146 [2024-11-28 13:10:38.035243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.146 [2024-11-28 13:10:38.035292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.146 [2024-11-28 13:10:38.035305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.146 [2024-11-28 13:10:38.035312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.146 [2024-11-28 13:10:38.035318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.146 [2024-11-28 13:10:38.035332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.146 qpair failed and we were unable to recover it. 
00:40:08.146 [2024-11-28 13:10:38.045273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.146 [2024-11-28 13:10:38.045328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.146 [2024-11-28 13:10:38.045340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.146 [2024-11-28 13:10:38.045347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.146 [2024-11-28 13:10:38.045353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.146 [2024-11-28 13:10:38.045367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.146 qpair failed and we were unable to recover it. 
00:40:08.146 [2024-11-28 13:10:38.055291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.146 [2024-11-28 13:10:38.055346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.146 [2024-11-28 13:10:38.055358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.146 [2024-11-28 13:10:38.055365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.146 [2024-11-28 13:10:38.055371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.146 [2024-11-28 13:10:38.055385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.146 qpair failed and we were unable to recover it. 
00:40:08.146 [2024-11-28 13:10:38.065273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.146 [2024-11-28 13:10:38.065328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.146 [2024-11-28 13:10:38.065341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.146 [2024-11-28 13:10:38.065348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.146 [2024-11-28 13:10:38.065354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.146 [2024-11-28 13:10:38.065368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.146 qpair failed and we were unable to recover it. 
00:40:08.146 [2024-11-28 13:10:38.075165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.146 [2024-11-28 13:10:38.075218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.146 [2024-11-28 13:10:38.075238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.146 [2024-11-28 13:10:38.075245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.146 [2024-11-28 13:10:38.075251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.146 [2024-11-28 13:10:38.075265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.146 qpair failed and we were unable to recover it. 
00:40:08.146 [2024-11-28 13:10:38.085285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.146 [2024-11-28 13:10:38.085342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.146 [2024-11-28 13:10:38.085354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.146 [2024-11-28 13:10:38.085361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.146 [2024-11-28 13:10:38.085367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.146 [2024-11-28 13:10:38.085381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.146 qpair failed and we were unable to recover it. 
00:40:08.146 [2024-11-28 13:10:38.095239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.146 [2024-11-28 13:10:38.095328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.146 [2024-11-28 13:10:38.095341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.146 [2024-11-28 13:10:38.095347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.146 [2024-11-28 13:10:38.095354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.146 [2024-11-28 13:10:38.095367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.146 qpair failed and we were unable to recover it. 
00:40:08.146 [2024-11-28 13:10:38.105298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.146 [2024-11-28 13:10:38.105356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.146 [2024-11-28 13:10:38.105369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.146 [2024-11-28 13:10:38.105375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.146 [2024-11-28 13:10:38.105381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.146 [2024-11-28 13:10:38.105396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.146 qpair failed and we were unable to recover it. 
00:40:08.146 [2024-11-28 13:10:38.115313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.146 [2024-11-28 13:10:38.115390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.146 [2024-11-28 13:10:38.115403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.146 [2024-11-28 13:10:38.115410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.146 [2024-11-28 13:10:38.115419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.146 [2024-11-28 13:10:38.115433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.146 qpair failed and we were unable to recover it. 
00:40:08.146 [2024-11-28 13:10:38.125337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.146 [2024-11-28 13:10:38.125406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.146 [2024-11-28 13:10:38.125418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.146 [2024-11-28 13:10:38.125425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.146 [2024-11-28 13:10:38.125431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.146 [2024-11-28 13:10:38.125445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.147 qpair failed and we were unable to recover it. 
00:40:08.147 [2024-11-28 13:10:38.135345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.147 [2024-11-28 13:10:38.135446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.147 [2024-11-28 13:10:38.135459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.147 [2024-11-28 13:10:38.135465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.147 [2024-11-28 13:10:38.135471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.147 [2024-11-28 13:10:38.135485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.147 qpair failed and we were unable to recover it. 
00:40:08.147 [2024-11-28 13:10:38.145316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.147 [2024-11-28 13:10:38.145398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.147 [2024-11-28 13:10:38.145411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.147 [2024-11-28 13:10:38.145417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.147 [2024-11-28 13:10:38.145423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.147 [2024-11-28 13:10:38.145437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.147 qpair failed and we were unable to recover it. 
00:40:08.147 [2024-11-28 13:10:38.155367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.147 [2024-11-28 13:10:38.155449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.147 [2024-11-28 13:10:38.155462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.147 [2024-11-28 13:10:38.155469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.147 [2024-11-28 13:10:38.155475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.147 [2024-11-28 13:10:38.155488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.147 qpair failed and we were unable to recover it. 
00:40:08.147 [2024-11-28 13:10:38.165305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.147 [2024-11-28 13:10:38.165365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.147 [2024-11-28 13:10:38.165378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.147 [2024-11-28 13:10:38.165384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.147 [2024-11-28 13:10:38.165390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.147 [2024-11-28 13:10:38.165404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.147 qpair failed and we were unable to recover it. 
00:40:08.147 [2024-11-28 13:10:38.175254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.147 [2024-11-28 13:10:38.175310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.147 [2024-11-28 13:10:38.175323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.147 [2024-11-28 13:10:38.175329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.147 [2024-11-28 13:10:38.175335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.147 [2024-11-28 13:10:38.175349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.147 qpair failed and we were unable to recover it. 
00:40:08.147 [2024-11-28 13:10:38.185349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.147 [2024-11-28 13:10:38.185403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.147 [2024-11-28 13:10:38.185416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.147 [2024-11-28 13:10:38.185422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.147 [2024-11-28 13:10:38.185428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.147 [2024-11-28 13:10:38.185442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.147 qpair failed and we were unable to recover it. 
00:40:08.147 [2024-11-28 13:10:38.195347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.147 [2024-11-28 13:10:38.195429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.147 [2024-11-28 13:10:38.195442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.147 [2024-11-28 13:10:38.195448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.147 [2024-11-28 13:10:38.195454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.147 [2024-11-28 13:10:38.195469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.147 qpair failed and we were unable to recover it. 
00:40:08.147 [2024-11-28 13:10:38.205384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.147 [2024-11-28 13:10:38.205441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.147 [2024-11-28 13:10:38.205458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.147 [2024-11-28 13:10:38.205465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.147 [2024-11-28 13:10:38.205470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.147 [2024-11-28 13:10:38.205484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.147 qpair failed and we were unable to recover it. 
00:40:08.147 [2024-11-28 13:10:38.215387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.147 [2024-11-28 13:10:38.215438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.147 [2024-11-28 13:10:38.215450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.147 [2024-11-28 13:10:38.215457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.147 [2024-11-28 13:10:38.215463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.147 [2024-11-28 13:10:38.215477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.147 qpair failed and we were unable to recover it. 
00:40:08.147 [2024-11-28 13:10:38.225384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.147 [2024-11-28 13:10:38.225439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.147 [2024-11-28 13:10:38.225452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.147 [2024-11-28 13:10:38.225459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.147 [2024-11-28 13:10:38.225465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.147 [2024-11-28 13:10:38.225479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.147 qpair failed and we were unable to recover it. 
00:40:08.148 [2024-11-28 13:10:38.235353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.148 [2024-11-28 13:10:38.235407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.148 [2024-11-28 13:10:38.235420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.148 [2024-11-28 13:10:38.235426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.148 [2024-11-28 13:10:38.235433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.148 [2024-11-28 13:10:38.235446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.148 qpair failed and we were unable to recover it. 
00:40:08.148 [2024-11-28 13:10:38.245378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.148 [2024-11-28 13:10:38.245432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.148 [2024-11-28 13:10:38.245445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.148 [2024-11-28 13:10:38.245454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.148 [2024-11-28 13:10:38.245461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.148 [2024-11-28 13:10:38.245475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.148 qpair failed and we were unable to recover it. 
00:40:08.148 [2024-11-28 13:10:38.255300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.148 [2024-11-28 13:10:38.255355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.148 [2024-11-28 13:10:38.255367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.148 [2024-11-28 13:10:38.255374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.148 [2024-11-28 13:10:38.255380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.148 [2024-11-28 13:10:38.255394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.148 qpair failed and we were unable to recover it. 
00:40:08.148 [2024-11-28 13:10:38.265401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.148 [2024-11-28 13:10:38.265456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.148 [2024-11-28 13:10:38.265469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.148 [2024-11-28 13:10:38.265476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.148 [2024-11-28 13:10:38.265482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.148 [2024-11-28 13:10:38.265496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.148 qpair failed and we were unable to recover it. 
00:40:08.410 [2024-11-28 13:10:38.275383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.410 [2024-11-28 13:10:38.275454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.410 [2024-11-28 13:10:38.275467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.410 [2024-11-28 13:10:38.275474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.410 [2024-11-28 13:10:38.275480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.410 [2024-11-28 13:10:38.275493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.410 qpair failed and we were unable to recover it. 
00:40:08.410 [2024-11-28 13:10:38.285413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.410 [2024-11-28 13:10:38.285466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.410 [2024-11-28 13:10:38.285478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.410 [2024-11-28 13:10:38.285485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.410 [2024-11-28 13:10:38.285491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.410 [2024-11-28 13:10:38.285508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.410 qpair failed and we were unable to recover it. 
00:40:08.410 [2024-11-28 13:10:38.295416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.410 [2024-11-28 13:10:38.295472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.410 [2024-11-28 13:10:38.295484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.410 [2024-11-28 13:10:38.295491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.410 [2024-11-28 13:10:38.295497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.410 [2024-11-28 13:10:38.295511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.410 qpair failed and we were unable to recover it. 
00:40:08.410 [2024-11-28 13:10:38.305409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.411 [2024-11-28 13:10:38.305464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.411 [2024-11-28 13:10:38.305477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.411 [2024-11-28 13:10:38.305483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.411 [2024-11-28 13:10:38.305489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.411 [2024-11-28 13:10:38.305503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.411 qpair failed and we were unable to recover it. 
00:40:08.411 [2024-11-28 13:10:38.315412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.411 [2024-11-28 13:10:38.315465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.411 [2024-11-28 13:10:38.315478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.411 [2024-11-28 13:10:38.315485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.411 [2024-11-28 13:10:38.315491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.411 [2024-11-28 13:10:38.315505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.411 qpair failed and we were unable to recover it. 
00:40:08.411 [2024-11-28 13:10:38.325412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.411 [2024-11-28 13:10:38.325471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.411 [2024-11-28 13:10:38.325483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.411 [2024-11-28 13:10:38.325490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.411 [2024-11-28 13:10:38.325496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.411 [2024-11-28 13:10:38.325510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.411 qpair failed and we were unable to recover it. 
00:40:08.411 [2024-11-28 13:10:38.335481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.411 [2024-11-28 13:10:38.335553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.411 [2024-11-28 13:10:38.335565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.411 [2024-11-28 13:10:38.335572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.411 [2024-11-28 13:10:38.335578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.411 [2024-11-28 13:10:38.335592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.411 qpair failed and we were unable to recover it. 
00:40:08.411 [2024-11-28 13:10:38.345405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.411 [2024-11-28 13:10:38.345457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.411 [2024-11-28 13:10:38.345470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.411 [2024-11-28 13:10:38.345476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.411 [2024-11-28 13:10:38.345482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.411 [2024-11-28 13:10:38.345496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.411 qpair failed and we were unable to recover it. 
00:40:08.411 [2024-11-28 13:10:38.355403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.411 [2024-11-28 13:10:38.355454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.411 [2024-11-28 13:10:38.355466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.411 [2024-11-28 13:10:38.355473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.411 [2024-11-28 13:10:38.355479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.411 [2024-11-28 13:10:38.355492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.411 qpair failed and we were unable to recover it. 
00:40:08.411 [2024-11-28 13:10:38.365440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.411 [2024-11-28 13:10:38.365504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.411 [2024-11-28 13:10:38.365517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.411 [2024-11-28 13:10:38.365524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.411 [2024-11-28 13:10:38.365530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.411 [2024-11-28 13:10:38.365544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.411 qpair failed and we were unable to recover it. 
00:40:08.411 [2024-11-28 13:10:38.375454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.411 [2024-11-28 13:10:38.375545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.411 [2024-11-28 13:10:38.375557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.411 [2024-11-28 13:10:38.375566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.411 [2024-11-28 13:10:38.375572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.411 [2024-11-28 13:10:38.375586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.411 qpair failed and we were unable to recover it. 
00:40:08.411 [2024-11-28 13:10:38.385469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.411 [2024-11-28 13:10:38.385520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.411 [2024-11-28 13:10:38.385533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.411 [2024-11-28 13:10:38.385539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.411 [2024-11-28 13:10:38.385545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.411 [2024-11-28 13:10:38.385558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.411 qpair failed and we were unable to recover it. 
00:40:08.411 [2024-11-28 13:10:38.395452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.411 [2024-11-28 13:10:38.395528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.411 [2024-11-28 13:10:38.395541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.411 [2024-11-28 13:10:38.395548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.411 [2024-11-28 13:10:38.395554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.411 [2024-11-28 13:10:38.395567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.411 qpair failed and we were unable to recover it. 
00:40:08.411 [2024-11-28 13:10:38.405457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.411 [2024-11-28 13:10:38.405518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.411 [2024-11-28 13:10:38.405530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.411 [2024-11-28 13:10:38.405536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.411 [2024-11-28 13:10:38.405543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.411 [2024-11-28 13:10:38.405556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.411 qpair failed and we were unable to recover it. 
00:40:08.411 [2024-11-28 13:10:38.415483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.411 [2024-11-28 13:10:38.415538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.411 [2024-11-28 13:10:38.415551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.411 [2024-11-28 13:10:38.415557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.411 [2024-11-28 13:10:38.415563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.412 [2024-11-28 13:10:38.415580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.412 qpair failed and we were unable to recover it. 
00:40:08.412 [2024-11-28 13:10:38.425482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.412 [2024-11-28 13:10:38.425535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.412 [2024-11-28 13:10:38.425548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.412 [2024-11-28 13:10:38.425555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.412 [2024-11-28 13:10:38.425561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.412 [2024-11-28 13:10:38.425575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.412 qpair failed and we were unable to recover it. 
00:40:08.412 [2024-11-28 13:10:38.435492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.412 [2024-11-28 13:10:38.435579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.412 [2024-11-28 13:10:38.435591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.412 [2024-11-28 13:10:38.435598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.412 [2024-11-28 13:10:38.435604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.412 [2024-11-28 13:10:38.435617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.412 qpair failed and we were unable to recover it. 
00:40:08.412 [2024-11-28 13:10:38.445492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.412 [2024-11-28 13:10:38.445542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.412 [2024-11-28 13:10:38.445555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.412 [2024-11-28 13:10:38.445561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.412 [2024-11-28 13:10:38.445567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.412 [2024-11-28 13:10:38.445581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.412 qpair failed and we were unable to recover it.
00:40:08.412 [2024-11-28 13:10:38.455525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.412 [2024-11-28 13:10:38.455578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.412 [2024-11-28 13:10:38.455590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.412 [2024-11-28 13:10:38.455597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.412 [2024-11-28 13:10:38.455603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.412 [2024-11-28 13:10:38.455616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.412 qpair failed and we were unable to recover it.
00:40:08.412 [2024-11-28 13:10:38.465389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.412 [2024-11-28 13:10:38.465443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.412 [2024-11-28 13:10:38.465458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.412 [2024-11-28 13:10:38.465464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.412 [2024-11-28 13:10:38.465470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.412 [2024-11-28 13:10:38.465485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.412 qpair failed and we were unable to recover it.
00:40:08.412 [2024-11-28 13:10:38.475509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.412 [2024-11-28 13:10:38.475565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.412 [2024-11-28 13:10:38.475578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.412 [2024-11-28 13:10:38.475585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.412 [2024-11-28 13:10:38.475591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.412 [2024-11-28 13:10:38.475605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.412 qpair failed and we were unable to recover it.
00:40:08.412 [2024-11-28 13:10:38.485509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.412 [2024-11-28 13:10:38.485566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.412 [2024-11-28 13:10:38.485578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.412 [2024-11-28 13:10:38.485585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.412 [2024-11-28 13:10:38.485591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.412 [2024-11-28 13:10:38.485604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.412 qpair failed and we were unable to recover it.
00:40:08.412 [2024-11-28 13:10:38.495552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.412 [2024-11-28 13:10:38.495626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.412 [2024-11-28 13:10:38.495640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.412 [2024-11-28 13:10:38.495646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.412 [2024-11-28 13:10:38.495652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.412 [2024-11-28 13:10:38.495667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.412 qpair failed and we were unable to recover it.
00:40:08.412 [2024-11-28 13:10:38.505512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.412 [2024-11-28 13:10:38.505563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.412 [2024-11-28 13:10:38.505579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.412 [2024-11-28 13:10:38.505585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.412 [2024-11-28 13:10:38.505592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.412 [2024-11-28 13:10:38.505606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.412 qpair failed and we were unable to recover it.
00:40:08.412 [2024-11-28 13:10:38.515392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.412 [2024-11-28 13:10:38.515449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.412 [2024-11-28 13:10:38.515461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.412 [2024-11-28 13:10:38.515467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.412 [2024-11-28 13:10:38.515473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.412 [2024-11-28 13:10:38.515487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.412 qpair failed and we were unable to recover it.
00:40:08.412 [2024-11-28 13:10:38.525528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.412 [2024-11-28 13:10:38.525579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.412 [2024-11-28 13:10:38.525593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.412 [2024-11-28 13:10:38.525599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.412 [2024-11-28 13:10:38.525605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.412 [2024-11-28 13:10:38.525620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.412 qpair failed and we were unable to recover it.
00:40:08.674 [2024-11-28 13:10:38.535555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.675 [2024-11-28 13:10:38.535659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.675 [2024-11-28 13:10:38.535672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.675 [2024-11-28 13:10:38.535679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.675 [2024-11-28 13:10:38.535685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.675 [2024-11-28 13:10:38.535699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.675 qpair failed and we were unable to recover it.
00:40:08.675 [2024-11-28 13:10:38.545535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.675 [2024-11-28 13:10:38.545619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.675 [2024-11-28 13:10:38.545632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.675 [2024-11-28 13:10:38.545639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.675 [2024-11-28 13:10:38.545648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.675 [2024-11-28 13:10:38.545662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.675 qpair failed and we were unable to recover it.
00:40:08.675 [2024-11-28 13:10:38.555543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.675 [2024-11-28 13:10:38.555599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.675 [2024-11-28 13:10:38.555612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.675 [2024-11-28 13:10:38.555618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.675 [2024-11-28 13:10:38.555624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.675 [2024-11-28 13:10:38.555638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.675 qpair failed and we were unable to recover it.
00:40:08.675 [2024-11-28 13:10:38.565441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.675 [2024-11-28 13:10:38.565498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.675 [2024-11-28 13:10:38.565511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.675 [2024-11-28 13:10:38.565517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.675 [2024-11-28 13:10:38.565523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.675 [2024-11-28 13:10:38.565537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.675 qpair failed and we were unable to recover it.
00:40:08.675 [2024-11-28 13:10:38.575579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.675 [2024-11-28 13:10:38.575634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.675 [2024-11-28 13:10:38.575647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.675 [2024-11-28 13:10:38.575653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.675 [2024-11-28 13:10:38.575659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.675 [2024-11-28 13:10:38.575673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.675 qpair failed and we were unable to recover it.
00:40:08.675 [2024-11-28 13:10:38.585576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.675 [2024-11-28 13:10:38.585629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.675 [2024-11-28 13:10:38.585642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.675 [2024-11-28 13:10:38.585648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.675 [2024-11-28 13:10:38.585654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.675 [2024-11-28 13:10:38.585668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.675 qpair failed and we were unable to recover it.
00:40:08.675 [2024-11-28 13:10:38.595562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.675 [2024-11-28 13:10:38.595611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.675 [2024-11-28 13:10:38.595624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.675 [2024-11-28 13:10:38.595630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.675 [2024-11-28 13:10:38.595636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.675 [2024-11-28 13:10:38.595650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.675 qpair failed and we were unable to recover it.
00:40:08.675 [2024-11-28 13:10:38.605580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.675 [2024-11-28 13:10:38.605633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.675 [2024-11-28 13:10:38.605645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.675 [2024-11-28 13:10:38.605652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.675 [2024-11-28 13:10:38.605658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.675 [2024-11-28 13:10:38.605671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.675 qpair failed and we were unable to recover it.
00:40:08.675 [2024-11-28 13:10:38.615608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.675 [2024-11-28 13:10:38.615711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.675 [2024-11-28 13:10:38.615724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.675 [2024-11-28 13:10:38.615731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.675 [2024-11-28 13:10:38.615737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.675 [2024-11-28 13:10:38.615750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.675 qpair failed and we were unable to recover it.
00:40:08.675 [2024-11-28 13:10:38.625583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.675 [2024-11-28 13:10:38.625684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.675 [2024-11-28 13:10:38.625697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.675 [2024-11-28 13:10:38.625704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.675 [2024-11-28 13:10:38.625710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.675 [2024-11-28 13:10:38.625723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.675 qpair failed and we were unable to recover it.
00:40:08.675 [2024-11-28 13:10:38.635591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.675 [2024-11-28 13:10:38.635643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.675 [2024-11-28 13:10:38.635658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.675 [2024-11-28 13:10:38.635665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.675 [2024-11-28 13:10:38.635671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.675 [2024-11-28 13:10:38.635685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.675 qpair failed and we were unable to recover it.
00:40:08.675 [2024-11-28 13:10:38.645629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.675 [2024-11-28 13:10:38.645698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.675 [2024-11-28 13:10:38.645710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.675 [2024-11-28 13:10:38.645717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.675 [2024-11-28 13:10:38.645723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.675 [2024-11-28 13:10:38.645736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.675 qpair failed and we were unable to recover it.
00:40:08.675 [2024-11-28 13:10:38.655478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.675 [2024-11-28 13:10:38.655534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.675 [2024-11-28 13:10:38.655546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.675 [2024-11-28 13:10:38.655552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.675 [2024-11-28 13:10:38.655558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.675 [2024-11-28 13:10:38.655572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.675 qpair failed and we were unable to recover it.
00:40:08.675 [2024-11-28 13:10:38.665603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.676 [2024-11-28 13:10:38.665649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.676 [2024-11-28 13:10:38.665661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.676 [2024-11-28 13:10:38.665668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.676 [2024-11-28 13:10:38.665674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.676 [2024-11-28 13:10:38.665688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.676 qpair failed and we were unable to recover it.
00:40:08.676 [2024-11-28 13:10:38.675609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.676 [2024-11-28 13:10:38.675654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.676 [2024-11-28 13:10:38.675667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.676 [2024-11-28 13:10:38.675673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.676 [2024-11-28 13:10:38.675682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.676 [2024-11-28 13:10:38.675696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.676 qpair failed and we were unable to recover it.
00:40:08.676 [2024-11-28 13:10:38.685632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.676 [2024-11-28 13:10:38.685689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.676 [2024-11-28 13:10:38.685701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.676 [2024-11-28 13:10:38.685708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.676 [2024-11-28 13:10:38.685714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.676 [2024-11-28 13:10:38.685728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.676 qpair failed and we were unable to recover it.
00:40:08.676 [2024-11-28 13:10:38.695635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.676 [2024-11-28 13:10:38.695694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.676 [2024-11-28 13:10:38.695706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.676 [2024-11-28 13:10:38.695713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.676 [2024-11-28 13:10:38.695719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.676 [2024-11-28 13:10:38.695733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.676 qpair failed and we were unable to recover it.
00:40:08.676 [2024-11-28 13:10:38.705677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.676 [2024-11-28 13:10:38.705744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.676 [2024-11-28 13:10:38.705756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.676 [2024-11-28 13:10:38.705763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.676 [2024-11-28 13:10:38.705769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.676 [2024-11-28 13:10:38.705782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.676 qpair failed and we were unable to recover it.
00:40:08.676 [2024-11-28 13:10:38.715620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.676 [2024-11-28 13:10:38.715719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.676 [2024-11-28 13:10:38.715731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.676 [2024-11-28 13:10:38.715737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.676 [2024-11-28 13:10:38.715743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.676 [2024-11-28 13:10:38.715757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.676 qpair failed and we were unable to recover it.
00:40:08.676 [2024-11-28 13:10:38.725512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.676 [2024-11-28 13:10:38.725563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.676 [2024-11-28 13:10:38.725577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.676 [2024-11-28 13:10:38.725584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.676 [2024-11-28 13:10:38.725590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.676 [2024-11-28 13:10:38.725609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.676 qpair failed and we were unable to recover it.
00:40:08.676 [2024-11-28 13:10:38.735654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.676 [2024-11-28 13:10:38.735710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.676 [2024-11-28 13:10:38.735723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.676 [2024-11-28 13:10:38.735730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.676 [2024-11-28 13:10:38.735736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.676 [2024-11-28 13:10:38.735750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.676 qpair failed and we were unable to recover it.
00:40:08.676 [2024-11-28 13:10:38.745635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.676 [2024-11-28 13:10:38.745724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.676 [2024-11-28 13:10:38.745737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.676 [2024-11-28 13:10:38.745744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.676 [2024-11-28 13:10:38.745750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.676 [2024-11-28 13:10:38.745764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.676 qpair failed and we were unable to recover it.
00:40:08.676 [2024-11-28 13:10:38.755638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.676 [2024-11-28 13:10:38.755688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.676 [2024-11-28 13:10:38.755700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.676 [2024-11-28 13:10:38.755707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.676 [2024-11-28 13:10:38.755713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.676 [2024-11-28 13:10:38.755726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.676 qpair failed and we were unable to recover it.
00:40:08.676 [2024-11-28 13:10:38.765672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.676 [2024-11-28 13:10:38.765731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.676 [2024-11-28 13:10:38.765744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.676 [2024-11-28 13:10:38.765751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.676 [2024-11-28 13:10:38.765757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.676 [2024-11-28 13:10:38.765771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.676 qpair failed and we were unable to recover it.
00:40:08.676 [2024-11-28 13:10:38.775699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.676 [2024-11-28 13:10:38.775783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.676 [2024-11-28 13:10:38.775796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.676 [2024-11-28 13:10:38.775802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.676 [2024-11-28 13:10:38.775808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.676 [2024-11-28 13:10:38.775822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.676 qpair failed and we were unable to recover it.
00:40:08.676 [2024-11-28 13:10:38.785670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.676 [2024-11-28 13:10:38.785718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.676 [2024-11-28 13:10:38.785734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.676 [2024-11-28 13:10:38.785741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.676 [2024-11-28 13:10:38.785747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:08.676 [2024-11-28 13:10:38.785762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:08.676 qpair failed and we were unable to recover it.
00:40:08.676 [2024-11-28 13:10:38.795687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.677 [2024-11-28 13:10:38.795741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.677 [2024-11-28 13:10:38.795756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.677 [2024-11-28 13:10:38.795763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.677 [2024-11-28 13:10:38.795769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.677 [2024-11-28 13:10:38.795789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.677 qpair failed and we were unable to recover it. 
00:40:08.939 [2024-11-28 13:10:38.805689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.939 [2024-11-28 13:10:38.805743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.939 [2024-11-28 13:10:38.805757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.939 [2024-11-28 13:10:38.805769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.939 [2024-11-28 13:10:38.805776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.939 [2024-11-28 13:10:38.805790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.939 qpair failed and we were unable to recover it. 
00:40:08.939 [2024-11-28 13:10:38.815721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.939 [2024-11-28 13:10:38.815781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.939 [2024-11-28 13:10:38.815794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.939 [2024-11-28 13:10:38.815800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.939 [2024-11-28 13:10:38.815807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.939 [2024-11-28 13:10:38.815821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.939 qpair failed and we were unable to recover it. 
00:40:08.939 [2024-11-28 13:10:38.825670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.939 [2024-11-28 13:10:38.825739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.939 [2024-11-28 13:10:38.825752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.939 [2024-11-28 13:10:38.825759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.939 [2024-11-28 13:10:38.825765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.939 [2024-11-28 13:10:38.825779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.939 qpair failed and we were unable to recover it. 
00:40:08.939 [2024-11-28 13:10:38.835587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.939 [2024-11-28 13:10:38.835648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.939 [2024-11-28 13:10:38.835661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.939 [2024-11-28 13:10:38.835667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.939 [2024-11-28 13:10:38.835674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.939 [2024-11-28 13:10:38.835688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.939 qpair failed and we were unable to recover it. 
00:40:08.939 [2024-11-28 13:10:38.845715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.939 [2024-11-28 13:10:38.845777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.939 [2024-11-28 13:10:38.845790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.939 [2024-11-28 13:10:38.845797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.939 [2024-11-28 13:10:38.845803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.939 [2024-11-28 13:10:38.845822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.939 qpair failed and we were unable to recover it. 
00:40:08.939 [2024-11-28 13:10:38.855741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.939 [2024-11-28 13:10:38.855805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.939 [2024-11-28 13:10:38.855818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.939 [2024-11-28 13:10:38.855825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.939 [2024-11-28 13:10:38.855831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.939 [2024-11-28 13:10:38.855846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.939 qpair failed and we were unable to recover it. 
00:40:08.939 [2024-11-28 13:10:38.865719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.939 [2024-11-28 13:10:38.865806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.939 [2024-11-28 13:10:38.865820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.939 [2024-11-28 13:10:38.865826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.939 [2024-11-28 13:10:38.865832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.939 [2024-11-28 13:10:38.865847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.939 qpair failed and we were unable to recover it. 
00:40:08.939 [2024-11-28 13:10:38.875714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.939 [2024-11-28 13:10:38.875765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.939 [2024-11-28 13:10:38.875779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.939 [2024-11-28 13:10:38.875786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.939 [2024-11-28 13:10:38.875792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.939 [2024-11-28 13:10:38.875806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.939 qpair failed and we were unable to recover it. 
00:40:08.939 [2024-11-28 13:10:38.885791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.939 [2024-11-28 13:10:38.885854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.939 [2024-11-28 13:10:38.885868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.940 [2024-11-28 13:10:38.885875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.940 [2024-11-28 13:10:38.885881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.940 [2024-11-28 13:10:38.885896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.940 qpair failed and we were unable to recover it. 
00:40:08.940 [2024-11-28 13:10:38.895760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.940 [2024-11-28 13:10:38.895830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.940 [2024-11-28 13:10:38.895859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.940 [2024-11-28 13:10:38.895867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.940 [2024-11-28 13:10:38.895874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.940 [2024-11-28 13:10:38.895895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.940 qpair failed and we were unable to recover it. 
00:40:08.940 [2024-11-28 13:10:38.905768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.940 [2024-11-28 13:10:38.905870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.940 [2024-11-28 13:10:38.905900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.940 [2024-11-28 13:10:38.905909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.940 [2024-11-28 13:10:38.905916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.940 [2024-11-28 13:10:38.905937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.940 qpair failed and we were unable to recover it. 
00:40:08.940 [2024-11-28 13:10:38.915718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.940 [2024-11-28 13:10:38.915783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.940 [2024-11-28 13:10:38.915813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.940 [2024-11-28 13:10:38.915822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.940 [2024-11-28 13:10:38.915829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.940 [2024-11-28 13:10:38.915850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.940 qpair failed and we were unable to recover it. 
00:40:08.940 [2024-11-28 13:10:38.925777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.940 [2024-11-28 13:10:38.925843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.940 [2024-11-28 13:10:38.925868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.940 [2024-11-28 13:10:38.925876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.940 [2024-11-28 13:10:38.925882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.940 [2024-11-28 13:10:38.925901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.940 qpair failed and we were unable to recover it. 
00:40:08.940 [2024-11-28 13:10:38.935815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.940 [2024-11-28 13:10:38.935880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.940 [2024-11-28 13:10:38.935898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.940 [2024-11-28 13:10:38.935911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.940 [2024-11-28 13:10:38.935917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.940 [2024-11-28 13:10:38.935935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.940 qpair failed and we were unable to recover it. 
00:40:08.940 [2024-11-28 13:10:38.945806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.940 [2024-11-28 13:10:38.945867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.940 [2024-11-28 13:10:38.945884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.940 [2024-11-28 13:10:38.945891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.940 [2024-11-28 13:10:38.945897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.940 [2024-11-28 13:10:38.945914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.940 qpair failed and we were unable to recover it. 
00:40:08.940 [2024-11-28 13:10:38.955820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.940 [2024-11-28 13:10:38.955895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.940 [2024-11-28 13:10:38.955912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.940 [2024-11-28 13:10:38.955919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.940 [2024-11-28 13:10:38.955925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.940 [2024-11-28 13:10:38.955942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.940 qpair failed and we were unable to recover it. 
00:40:08.940 [2024-11-28 13:10:38.965833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.940 [2024-11-28 13:10:38.965897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.940 [2024-11-28 13:10:38.965915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.940 [2024-11-28 13:10:38.965921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.940 [2024-11-28 13:10:38.965928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.940 [2024-11-28 13:10:38.965945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.940 qpair failed and we were unable to recover it. 
00:40:08.940 [2024-11-28 13:10:38.975842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.940 [2024-11-28 13:10:38.975913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.940 [2024-11-28 13:10:38.975930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.940 [2024-11-28 13:10:38.975937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.940 [2024-11-28 13:10:38.975943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.940 [2024-11-28 13:10:38.975965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.940 qpair failed and we were unable to recover it. 
00:40:08.940 [2024-11-28 13:10:38.985821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.940 [2024-11-28 13:10:38.985921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.940 [2024-11-28 13:10:38.985937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.940 [2024-11-28 13:10:38.985944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.940 [2024-11-28 13:10:38.985951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.940 [2024-11-28 13:10:38.985968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.940 qpair failed and we were unable to recover it. 
00:40:08.940 [2024-11-28 13:10:38.995837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.940 [2024-11-28 13:10:38.995898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.940 [2024-11-28 13:10:38.995914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.940 [2024-11-28 13:10:38.995921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.940 [2024-11-28 13:10:38.995927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.940 [2024-11-28 13:10:38.995944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.940 qpair failed and we were unable to recover it. 
00:40:08.940 [2024-11-28 13:10:39.005911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.940 [2024-11-28 13:10:39.006011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.940 [2024-11-28 13:10:39.006027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.940 [2024-11-28 13:10:39.006034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.940 [2024-11-28 13:10:39.006041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.940 [2024-11-28 13:10:39.006057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.940 qpair failed and we were unable to recover it. 
00:40:08.940 [2024-11-28 13:10:39.015822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.940 [2024-11-28 13:10:39.015895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.940 [2024-11-28 13:10:39.015914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.940 [2024-11-28 13:10:39.015922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.940 [2024-11-28 13:10:39.015928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.941 [2024-11-28 13:10:39.015951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.941 qpair failed and we were unable to recover it. 
00:40:08.941 [2024-11-28 13:10:39.025871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.941 [2024-11-28 13:10:39.025928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.941 [2024-11-28 13:10:39.025945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.941 [2024-11-28 13:10:39.025952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.941 [2024-11-28 13:10:39.025958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.941 [2024-11-28 13:10:39.025974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.941 qpair failed and we were unable to recover it. 
00:40:08.941 [2024-11-28 13:10:39.035892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.941 [2024-11-28 13:10:39.035954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.941 [2024-11-28 13:10:39.035970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.941 [2024-11-28 13:10:39.035977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.941 [2024-11-28 13:10:39.035983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.941 [2024-11-28 13:10:39.036000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.941 qpair failed and we were unable to recover it. 
00:40:08.941 [2024-11-28 13:10:39.045913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.941 [2024-11-28 13:10:39.045973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.941 [2024-11-28 13:10:39.045988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.941 [2024-11-28 13:10:39.045995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.941 [2024-11-28 13:10:39.046001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.941 [2024-11-28 13:10:39.046018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.941 qpair failed and we were unable to recover it. 
00:40:08.941 [2024-11-28 13:10:39.055939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.941 [2024-11-28 13:10:39.056017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.941 [2024-11-28 13:10:39.056033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.941 [2024-11-28 13:10:39.056040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.941 [2024-11-28 13:10:39.056046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:08.941 [2024-11-28 13:10:39.056062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:08.941 qpair failed and we were unable to recover it. 
00:40:09.204 [2024-11-28 13:10:39.065868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.204 [2024-11-28 13:10:39.065929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.204 [2024-11-28 13:10:39.065951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.204 [2024-11-28 13:10:39.065958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.204 [2024-11-28 13:10:39.065965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.204 [2024-11-28 13:10:39.065981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.204 qpair failed and we were unable to recover it. 
00:40:09.204 [2024-11-28 13:10:39.075904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.204 [2024-11-28 13:10:39.075967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.204 [2024-11-28 13:10:39.075983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.204 [2024-11-28 13:10:39.075990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.204 [2024-11-28 13:10:39.075996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.204 [2024-11-28 13:10:39.076012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.204 qpair failed and we were unable to recover it. 
00:40:09.204 [2024-11-28 13:10:39.085924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.204 [2024-11-28 13:10:39.086020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.204 [2024-11-28 13:10:39.086036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.204 [2024-11-28 13:10:39.086043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.204 [2024-11-28 13:10:39.086049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.204 [2024-11-28 13:10:39.086065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.204 qpair failed and we were unable to recover it. 
00:40:09.204 [2024-11-28 13:10:39.095966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.204 [2024-11-28 13:10:39.096030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.204 [2024-11-28 13:10:39.096045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.204 [2024-11-28 13:10:39.096052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.204 [2024-11-28 13:10:39.096058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.204 [2024-11-28 13:10:39.096074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.204 qpair failed and we were unable to recover it. 
00:40:09.204 [2024-11-28 13:10:39.105947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.204 [2024-11-28 13:10:39.106040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.204 [2024-11-28 13:10:39.106057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.204 [2024-11-28 13:10:39.106063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.204 [2024-11-28 13:10:39.106075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.204 [2024-11-28 13:10:39.106091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.204 qpair failed and we were unable to recover it. 
00:40:09.204 [2024-11-28 13:10:39.115963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.204 [2024-11-28 13:10:39.116027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.204 [2024-11-28 13:10:39.116043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.204 [2024-11-28 13:10:39.116050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.204 [2024-11-28 13:10:39.116057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.204 [2024-11-28 13:10:39.116073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.204 qpair failed and we were unable to recover it. 
00:40:09.204 [2024-11-28 13:10:39.125949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.204 [2024-11-28 13:10:39.126010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.204 [2024-11-28 13:10:39.126025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.204 [2024-11-28 13:10:39.126032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.204 [2024-11-28 13:10:39.126038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.204 [2024-11-28 13:10:39.126054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.204 qpair failed and we were unable to recover it. 
00:40:09.204 [2024-11-28 13:10:39.136004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.204 [2024-11-28 13:10:39.136097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.204 [2024-11-28 13:10:39.136114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.204 [2024-11-28 13:10:39.136121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.204 [2024-11-28 13:10:39.136127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.204 [2024-11-28 13:10:39.136143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.204 qpair failed and we were unable to recover it. 
00:40:09.204 [2024-11-28 13:10:39.145975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.204 [2024-11-28 13:10:39.146040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.204 [2024-11-28 13:10:39.146056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.204 [2024-11-28 13:10:39.146063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.204 [2024-11-28 13:10:39.146069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.204 [2024-11-28 13:10:39.146086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.204 qpair failed and we were unable to recover it. 
00:40:09.204 [2024-11-28 13:10:39.155962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.204 [2024-11-28 13:10:39.156060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.204 [2024-11-28 13:10:39.156076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.205 [2024-11-28 13:10:39.156083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.205 [2024-11-28 13:10:39.156089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.205 [2024-11-28 13:10:39.156105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.205 qpair failed and we were unable to recover it. 
00:40:09.205 [2024-11-28 13:10:39.165979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.205 [2024-11-28 13:10:39.166049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.205 [2024-11-28 13:10:39.166065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.205 [2024-11-28 13:10:39.166072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.205 [2024-11-28 13:10:39.166078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.205 [2024-11-28 13:10:39.166095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.205 qpair failed and we were unable to recover it. 
00:40:09.205 [2024-11-28 13:10:39.176013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.205 [2024-11-28 13:10:39.176085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.205 [2024-11-28 13:10:39.176101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.205 [2024-11-28 13:10:39.176108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.205 [2024-11-28 13:10:39.176114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.205 [2024-11-28 13:10:39.176130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.205 qpair failed and we were unable to recover it. 
00:40:09.205 [2024-11-28 13:10:39.185965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.205 [2024-11-28 13:10:39.186029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.205 [2024-11-28 13:10:39.186044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.205 [2024-11-28 13:10:39.186051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.205 [2024-11-28 13:10:39.186058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.205 [2024-11-28 13:10:39.186074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.205 qpair failed and we were unable to recover it. 
00:40:09.205 [2024-11-28 13:10:39.196004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.205 [2024-11-28 13:10:39.196071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.205 [2024-11-28 13:10:39.196092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.205 [2024-11-28 13:10:39.196099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.205 [2024-11-28 13:10:39.196105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.205 [2024-11-28 13:10:39.196121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.205 qpair failed and we were unable to recover it. 
00:40:09.205 [2024-11-28 13:10:39.206049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.205 [2024-11-28 13:10:39.206114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.205 [2024-11-28 13:10:39.206131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.205 [2024-11-28 13:10:39.206138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.205 [2024-11-28 13:10:39.206144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.205 [2024-11-28 13:10:39.206166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.205 qpair failed and we were unable to recover it. 
00:40:09.205 [2024-11-28 13:10:39.216009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.205 [2024-11-28 13:10:39.216076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.205 [2024-11-28 13:10:39.216091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.205 [2024-11-28 13:10:39.216098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.205 [2024-11-28 13:10:39.216104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.205 [2024-11-28 13:10:39.216120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.205 qpair failed and we were unable to recover it. 
00:40:09.205 [2024-11-28 13:10:39.226005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.205 [2024-11-28 13:10:39.226070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.205 [2024-11-28 13:10:39.226087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.205 [2024-11-28 13:10:39.226094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.205 [2024-11-28 13:10:39.226100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.205 [2024-11-28 13:10:39.226116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.205 qpair failed and we were unable to recover it. 
00:40:09.205 [2024-11-28 13:10:39.235991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.205 [2024-11-28 13:10:39.236042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.205 [2024-11-28 13:10:39.236058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.205 [2024-11-28 13:10:39.236065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.205 [2024-11-28 13:10:39.236076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.205 [2024-11-28 13:10:39.236092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.205 qpair failed and we were unable to recover it. 
00:40:09.205 [2024-11-28 13:10:39.246038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.205 [2024-11-28 13:10:39.246103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.205 [2024-11-28 13:10:39.246119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.205 [2024-11-28 13:10:39.246126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.205 [2024-11-28 13:10:39.246133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.205 [2024-11-28 13:10:39.246149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.205 qpair failed and we were unable to recover it. 
00:40:09.205 [2024-11-28 13:10:39.256070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.205 [2024-11-28 13:10:39.256184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.206 [2024-11-28 13:10:39.256200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.206 [2024-11-28 13:10:39.256208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.206 [2024-11-28 13:10:39.256215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.206 [2024-11-28 13:10:39.256231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.206 qpair failed and we were unable to recover it. 
00:40:09.206 [2024-11-28 13:10:39.266026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.206 [2024-11-28 13:10:39.266079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.206 [2024-11-28 13:10:39.266095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.206 [2024-11-28 13:10:39.266102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.206 [2024-11-28 13:10:39.266108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.206 [2024-11-28 13:10:39.266124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.206 qpair failed and we were unable to recover it. 
00:40:09.206 [2024-11-28 13:10:39.275973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.206 [2024-11-28 13:10:39.276026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.206 [2024-11-28 13:10:39.276041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.206 [2024-11-28 13:10:39.276048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.206 [2024-11-28 13:10:39.276054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.206 [2024-11-28 13:10:39.276069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.206 qpair failed and we were unable to recover it. 
00:40:09.206 [2024-11-28 13:10:39.286031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.206 [2024-11-28 13:10:39.286096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.206 [2024-11-28 13:10:39.286113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.206 [2024-11-28 13:10:39.286120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.206 [2024-11-28 13:10:39.286127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.206 [2024-11-28 13:10:39.286144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.206 qpair failed and we were unable to recover it. 
00:40:09.206 [2024-11-28 13:10:39.295951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.206 [2024-11-28 13:10:39.296016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.206 [2024-11-28 13:10:39.296033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.206 [2024-11-28 13:10:39.296039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.206 [2024-11-28 13:10:39.296046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.206 [2024-11-28 13:10:39.296068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.206 qpair failed and we were unable to recover it. 
00:40:09.206 [2024-11-28 13:10:39.306029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.206 [2024-11-28 13:10:39.306091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.206 [2024-11-28 13:10:39.306106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.206 [2024-11-28 13:10:39.306113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.206 [2024-11-28 13:10:39.306119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.206 [2024-11-28 13:10:39.306135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.206 qpair failed and we were unable to recover it. 
00:40:09.206 [2024-11-28 13:10:39.315974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.206 [2024-11-28 13:10:39.316030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.206 [2024-11-28 13:10:39.316044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.206 [2024-11-28 13:10:39.316051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.206 [2024-11-28 13:10:39.316057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.206 [2024-11-28 13:10:39.316072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.206 qpair failed and we were unable to recover it. 
00:40:09.206 [2024-11-28 13:10:39.326069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.206 [2024-11-28 13:10:39.326147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.206 [2024-11-28 13:10:39.326166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.206 [2024-11-28 13:10:39.326173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.206 [2024-11-28 13:10:39.326179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.206 [2024-11-28 13:10:39.326194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.206 qpair failed and we were unable to recover it. 
00:40:09.469 [2024-11-28 13:10:39.336063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.469 [2024-11-28 13:10:39.336127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.469 [2024-11-28 13:10:39.336140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.469 [2024-11-28 13:10:39.336147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.469 [2024-11-28 13:10:39.336154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.469 [2024-11-28 13:10:39.336174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.469 qpair failed and we were unable to recover it. 
00:40:09.469 [2024-11-28 13:10:39.346052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.469 [2024-11-28 13:10:39.346148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.469 [2024-11-28 13:10:39.346166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.469 [2024-11-28 13:10:39.346174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.469 [2024-11-28 13:10:39.346180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.469 [2024-11-28 13:10:39.346195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.469 qpair failed and we were unable to recover it. 
00:40:09.469 [2024-11-28 13:10:39.355974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.469 [2024-11-28 13:10:39.356024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.469 [2024-11-28 13:10:39.356038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.469 [2024-11-28 13:10:39.356045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.469 [2024-11-28 13:10:39.356051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.469 [2024-11-28 13:10:39.356066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.469 qpair failed and we were unable to recover it. 
00:40:09.469 [2024-11-28 13:10:39.366047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.469 [2024-11-28 13:10:39.366165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.469 [2024-11-28 13:10:39.366180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.469 [2024-11-28 13:10:39.366190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.469 [2024-11-28 13:10:39.366197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.469 [2024-11-28 13:10:39.366212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.469 qpair failed and we were unable to recover it. 
00:40:09.469 [2024-11-28 13:10:39.376065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.469 [2024-11-28 13:10:39.376128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.469 [2024-11-28 13:10:39.376141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.469 [2024-11-28 13:10:39.376148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.469 [2024-11-28 13:10:39.376154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.469 [2024-11-28 13:10:39.376174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.469 qpair failed and we were unable to recover it. 
00:40:09.469 [2024-11-28 13:10:39.386051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.469 [2024-11-28 13:10:39.386107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.469 [2024-11-28 13:10:39.386120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.469 [2024-11-28 13:10:39.386127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.469 [2024-11-28 13:10:39.386133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.469 [2024-11-28 13:10:39.386147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.469 qpair failed and we were unable to recover it. 
00:40:09.469 [2024-11-28 13:10:39.395980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.469 [2024-11-28 13:10:39.396025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.469 [2024-11-28 13:10:39.396038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.469 [2024-11-28 13:10:39.396045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.469 [2024-11-28 13:10:39.396051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.469 [2024-11-28 13:10:39.396065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.469 qpair failed and we were unable to recover it. 
00:40:09.469 [2024-11-28 13:10:39.406085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.469 [2024-11-28 13:10:39.406144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.469 [2024-11-28 13:10:39.406157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.469 [2024-11-28 13:10:39.406168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.469 [2024-11-28 13:10:39.406174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.469 [2024-11-28 13:10:39.406192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.469 qpair failed and we were unable to recover it. 
00:40:09.469 [2024-11-28 13:10:39.415958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.469 [2024-11-28 13:10:39.416015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.469 [2024-11-28 13:10:39.416027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.469 [2024-11-28 13:10:39.416034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.469 [2024-11-28 13:10:39.416040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.469 [2024-11-28 13:10:39.416054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.469 qpair failed and we were unable to recover it. 
00:40:09.469 [2024-11-28 13:10:39.426056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.470 [2024-11-28 13:10:39.426106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.470 [2024-11-28 13:10:39.426119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.470 [2024-11-28 13:10:39.426125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.470 [2024-11-28 13:10:39.426131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.470 [2024-11-28 13:10:39.426145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.470 qpair failed and we were unable to recover it. 
00:40:09.470 [2024-11-28 13:10:39.435989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.470 [2024-11-28 13:10:39.436036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.470 [2024-11-28 13:10:39.436049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.470 [2024-11-28 13:10:39.436056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.470 [2024-11-28 13:10:39.436062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.470 [2024-11-28 13:10:39.436076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.470 qpair failed and we were unable to recover it. 
00:40:09.470 [2024-11-28 13:10:39.446048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.470 [2024-11-28 13:10:39.446103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.470 [2024-11-28 13:10:39.446115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.470 [2024-11-28 13:10:39.446122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.470 [2024-11-28 13:10:39.446128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.470 [2024-11-28 13:10:39.446142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.470 qpair failed and we were unable to recover it. 
00:40:09.470 [2024-11-28 13:10:39.456105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.470 [2024-11-28 13:10:39.456169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.470 [2024-11-28 13:10:39.456182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.470 [2024-11-28 13:10:39.456188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.470 [2024-11-28 13:10:39.456194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.470 [2024-11-28 13:10:39.456208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.470 qpair failed and we were unable to recover it. 
00:40:09.470 [2024-11-28 13:10:39.466094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.470 [2024-11-28 13:10:39.466143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.470 [2024-11-28 13:10:39.466156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.470 [2024-11-28 13:10:39.466166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.470 [2024-11-28 13:10:39.466173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.470 [2024-11-28 13:10:39.466187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.470 qpair failed and we were unable to recover it. 
00:40:09.470 [2024-11-28 13:10:39.476054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.470 [2024-11-28 13:10:39.476139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.470 [2024-11-28 13:10:39.476151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.470 [2024-11-28 13:10:39.476163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.470 [2024-11-28 13:10:39.476170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.470 [2024-11-28 13:10:39.476184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.470 qpair failed and we were unable to recover it. 
00:40:09.470 [2024-11-28 13:10:39.486051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.470 [2024-11-28 13:10:39.486101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.470 [2024-11-28 13:10:39.486117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.470 [2024-11-28 13:10:39.486124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.470 [2024-11-28 13:10:39.486130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.470 [2024-11-28 13:10:39.486143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.470 qpair failed and we were unable to recover it. 
00:40:09.470 [2024-11-28 13:10:39.496065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.470 [2024-11-28 13:10:39.496162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.470 [2024-11-28 13:10:39.496183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.470 [2024-11-28 13:10:39.496190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.470 [2024-11-28 13:10:39.496196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.470 [2024-11-28 13:10:39.496210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.470 qpair failed and we were unable to recover it. 
00:40:09.470 [2024-11-28 13:10:39.506094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.470 [2024-11-28 13:10:39.506150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.470 [2024-11-28 13:10:39.506166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.470 [2024-11-28 13:10:39.506173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.470 [2024-11-28 13:10:39.506179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.470 [2024-11-28 13:10:39.506194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.470 qpair failed and we were unable to recover it. 
00:40:09.470 [2024-11-28 13:10:39.516062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.470 [2024-11-28 13:10:39.516154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.470 [2024-11-28 13:10:39.516170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.470 [2024-11-28 13:10:39.516177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.470 [2024-11-28 13:10:39.516183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.470 [2024-11-28 13:10:39.516197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.470 qpair failed and we were unable to recover it. 
00:40:09.470 [2024-11-28 13:10:39.526108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.470 [2024-11-28 13:10:39.526165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.470 [2024-11-28 13:10:39.526178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.470 [2024-11-28 13:10:39.526185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.470 [2024-11-28 13:10:39.526191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.470 [2024-11-28 13:10:39.526205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.470 qpair failed and we were unable to recover it. 
00:40:09.470 [2024-11-28 13:10:39.536071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.470 [2024-11-28 13:10:39.536167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.470 [2024-11-28 13:10:39.536180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.470 [2024-11-28 13:10:39.536187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.470 [2024-11-28 13:10:39.536193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.470 [2024-11-28 13:10:39.536210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.470 qpair failed and we were unable to recover it. 
00:40:09.470 [2024-11-28 13:10:39.546103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.470 [2024-11-28 13:10:39.546183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.470 [2024-11-28 13:10:39.546196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.470 [2024-11-28 13:10:39.546202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.470 [2024-11-28 13:10:39.546209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.470 [2024-11-28 13:10:39.546223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.470 qpair failed and we were unable to recover it. 
00:40:09.470 [2024-11-28 13:10:39.556066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.470 [2024-11-28 13:10:39.556111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.471 [2024-11-28 13:10:39.556123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.471 [2024-11-28 13:10:39.556129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.471 [2024-11-28 13:10:39.556136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.471 [2024-11-28 13:10:39.556149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.471 qpair failed and we were unable to recover it. 
00:40:09.471 [2024-11-28 13:10:39.565989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.471 [2024-11-28 13:10:39.566049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.471 [2024-11-28 13:10:39.566062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.471 [2024-11-28 13:10:39.566068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.471 [2024-11-28 13:10:39.566074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.471 [2024-11-28 13:10:39.566088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.471 qpair failed and we were unable to recover it. 
00:40:09.471 [2024-11-28 13:10:39.576077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.471 [2024-11-28 13:10:39.576164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.471 [2024-11-28 13:10:39.576177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.471 [2024-11-28 13:10:39.576183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.471 [2024-11-28 13:10:39.576189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.471 [2024-11-28 13:10:39.576203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.471 qpair failed and we were unable to recover it. 
00:40:09.471 [2024-11-28 13:10:39.586131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.471 [2024-11-28 13:10:39.586192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.471 [2024-11-28 13:10:39.586205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.471 [2024-11-28 13:10:39.586211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.471 [2024-11-28 13:10:39.586218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.471 [2024-11-28 13:10:39.586231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.471 qpair failed and we were unable to recover it. 
00:40:09.733 [2024-11-28 13:10:39.596088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.733 [2024-11-28 13:10:39.596134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.733 [2024-11-28 13:10:39.596147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.733 [2024-11-28 13:10:39.596154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.733 [2024-11-28 13:10:39.596163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.733 [2024-11-28 13:10:39.596178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.733 qpair failed and we were unable to recover it. 
00:40:09.733 [2024-11-28 13:10:39.606131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.733 [2024-11-28 13:10:39.606195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.733 [2024-11-28 13:10:39.606208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.733 [2024-11-28 13:10:39.606214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.733 [2024-11-28 13:10:39.606220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.733 [2024-11-28 13:10:39.606234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.733 qpair failed and we were unable to recover it. 
00:40:09.733 [2024-11-28 13:10:39.616065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.733 [2024-11-28 13:10:39.616116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.733 [2024-11-28 13:10:39.616129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.733 [2024-11-28 13:10:39.616136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.733 [2024-11-28 13:10:39.616142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.733 [2024-11-28 13:10:39.616156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.733 qpair failed and we were unable to recover it. 
00:40:09.733 [2024-11-28 13:10:39.626123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.733 [2024-11-28 13:10:39.626187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.733 [2024-11-28 13:10:39.626203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.733 [2024-11-28 13:10:39.626209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.733 [2024-11-28 13:10:39.626215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.733 [2024-11-28 13:10:39.626229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.733 qpair failed and we were unable to recover it. 
00:40:09.733 [2024-11-28 13:10:39.636099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.733 [2024-11-28 13:10:39.636152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.733 [2024-11-28 13:10:39.636168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.733 [2024-11-28 13:10:39.636175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.733 [2024-11-28 13:10:39.636181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.733 [2024-11-28 13:10:39.636195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.733 qpair failed and we were unable to recover it. 
00:40:09.733 [2024-11-28 13:10:39.646097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.733 [2024-11-28 13:10:39.646147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.733 [2024-11-28 13:10:39.646164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.733 [2024-11-28 13:10:39.646171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.733 [2024-11-28 13:10:39.646177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.733 [2024-11-28 13:10:39.646191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.733 qpair failed and we were unable to recover it. 
00:40:09.733 [2024-11-28 13:10:39.656082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.733 [2024-11-28 13:10:39.656127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.733 [2024-11-28 13:10:39.656140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.733 [2024-11-28 13:10:39.656146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.733 [2024-11-28 13:10:39.656153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.733 [2024-11-28 13:10:39.656171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.733 qpair failed and we were unable to recover it. 
00:40:09.733 [2024-11-28 13:10:39.666116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.733 [2024-11-28 13:10:39.666191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.733 [2024-11-28 13:10:39.666204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.733 [2024-11-28 13:10:39.666210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.733 [2024-11-28 13:10:39.666220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.733 [2024-11-28 13:10:39.666234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.733 qpair failed and we were unable to recover it. 
00:40:09.733 [2024-11-28 13:10:39.676090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.733 [2024-11-28 13:10:39.676178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.733 [2024-11-28 13:10:39.676192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.733 [2024-11-28 13:10:39.676198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.733 [2024-11-28 13:10:39.676208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.733 [2024-11-28 13:10:39.676223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.733 qpair failed and we were unable to recover it. 
00:40:09.733 [2024-11-28 13:10:39.686103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.733 [2024-11-28 13:10:39.686152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.733 [2024-11-28 13:10:39.686168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.733 [2024-11-28 13:10:39.686175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.733 [2024-11-28 13:10:39.686181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.733 [2024-11-28 13:10:39.686195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.733 qpair failed and we were unable to recover it.
00:40:09.733 [2024-11-28 13:10:39.696109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.733 [2024-11-28 13:10:39.696155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.733 [2024-11-28 13:10:39.696170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.733 [2024-11-28 13:10:39.696177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.733 [2024-11-28 13:10:39.696183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.733 [2024-11-28 13:10:39.696197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.733 qpair failed and we were unable to recover it.
00:40:09.733 [2024-11-28 13:10:39.706106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.734 [2024-11-28 13:10:39.706167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.734 [2024-11-28 13:10:39.706181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.734 [2024-11-28 13:10:39.706187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.734 [2024-11-28 13:10:39.706193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.734 [2024-11-28 13:10:39.706207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.734 qpair failed and we were unable to recover it.
00:40:09.734 [2024-11-28 13:10:39.716106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.734 [2024-11-28 13:10:39.716150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.734 [2024-11-28 13:10:39.716167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.734 [2024-11-28 13:10:39.716174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.734 [2024-11-28 13:10:39.716180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.734 [2024-11-28 13:10:39.716194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.734 qpair failed and we were unable to recover it.
00:40:09.734 [2024-11-28 13:10:39.726120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.734 [2024-11-28 13:10:39.726170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.734 [2024-11-28 13:10:39.726183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.734 [2024-11-28 13:10:39.726190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.734 [2024-11-28 13:10:39.726196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.734 [2024-11-28 13:10:39.726209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.734 qpair failed and we were unable to recover it.
00:40:09.734 [2024-11-28 13:10:39.736090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.734 [2024-11-28 13:10:39.736153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.734 [2024-11-28 13:10:39.736169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.734 [2024-11-28 13:10:39.736176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.734 [2024-11-28 13:10:39.736182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.734 [2024-11-28 13:10:39.736196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.734 qpair failed and we were unable to recover it.
00:40:09.734 [2024-11-28 13:10:39.746093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.734 [2024-11-28 13:10:39.746141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.734 [2024-11-28 13:10:39.746154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.734 [2024-11-28 13:10:39.746164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.734 [2024-11-28 13:10:39.746170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.734 [2024-11-28 13:10:39.746184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.734 qpair failed and we were unable to recover it.
00:40:09.734 [2024-11-28 13:10:39.756040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.734 [2024-11-28 13:10:39.756079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.734 [2024-11-28 13:10:39.756095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.734 [2024-11-28 13:10:39.756101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.734 [2024-11-28 13:10:39.756108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.734 [2024-11-28 13:10:39.756122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.734 qpair failed and we were unable to recover it.
00:40:09.734 [2024-11-28 13:10:39.766145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.734 [2024-11-28 13:10:39.766198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.734 [2024-11-28 13:10:39.766211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.734 [2024-11-28 13:10:39.766218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.734 [2024-11-28 13:10:39.766224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.734 [2024-11-28 13:10:39.766238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.734 qpair failed and we were unable to recover it.
00:40:09.734 [2024-11-28 13:10:39.776168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.734 [2024-11-28 13:10:39.776214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.734 [2024-11-28 13:10:39.776226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.734 [2024-11-28 13:10:39.776233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.734 [2024-11-28 13:10:39.776239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.734 [2024-11-28 13:10:39.776253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.734 qpair failed and we were unable to recover it.
00:40:09.734 [2024-11-28 13:10:39.786176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.734 [2024-11-28 13:10:39.786227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.734 [2024-11-28 13:10:39.786240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.734 [2024-11-28 13:10:39.786247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.734 [2024-11-28 13:10:39.786253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.734 [2024-11-28 13:10:39.786267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.734 qpair failed and we were unable to recover it.
00:40:09.734 [2024-11-28 13:10:39.796139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.734 [2024-11-28 13:10:39.796185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.734 [2024-11-28 13:10:39.796197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.734 [2024-11-28 13:10:39.796207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.734 [2024-11-28 13:10:39.796213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.734 [2024-11-28 13:10:39.796227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.734 qpair failed and we were unable to recover it.
00:40:09.734 [2024-11-28 13:10:39.806151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.734 [2024-11-28 13:10:39.806208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.734 [2024-11-28 13:10:39.806220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.734 [2024-11-28 13:10:39.806227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.734 [2024-11-28 13:10:39.806233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.734 [2024-11-28 13:10:39.806247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.735 qpair failed and we were unable to recover it.
00:40:09.735 [2024-11-28 13:10:39.816161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.735 [2024-11-28 13:10:39.816215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.735 [2024-11-28 13:10:39.816228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.735 [2024-11-28 13:10:39.816235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.735 [2024-11-28 13:10:39.816241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.735 [2024-11-28 13:10:39.816255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.735 qpair failed and we were unable to recover it.
00:40:09.735 [2024-11-28 13:10:39.826201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.735 [2024-11-28 13:10:39.826248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.735 [2024-11-28 13:10:39.826260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.735 [2024-11-28 13:10:39.826266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.735 [2024-11-28 13:10:39.826273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.735 [2024-11-28 13:10:39.826286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.735 qpair failed and we were unable to recover it.
00:40:09.735 [2024-11-28 13:10:39.836149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.735 [2024-11-28 13:10:39.836205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.735 [2024-11-28 13:10:39.836218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.735 [2024-11-28 13:10:39.836224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.735 [2024-11-28 13:10:39.836230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.735 [2024-11-28 13:10:39.836244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.735 qpair failed and we were unable to recover it.
00:40:09.735 [2024-11-28 13:10:39.846056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.735 [2024-11-28 13:10:39.846101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.735 [2024-11-28 13:10:39.846114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.735 [2024-11-28 13:10:39.846120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.735 [2024-11-28 13:10:39.846126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.735 [2024-11-28 13:10:39.846140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.735 qpair failed and we were unable to recover it.
00:40:09.735 [2024-11-28 13:10:39.856188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.735 [2024-11-28 13:10:39.856234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.735 [2024-11-28 13:10:39.856247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.735 [2024-11-28 13:10:39.856253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.735 [2024-11-28 13:10:39.856260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.735 [2024-11-28 13:10:39.856274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.735 qpair failed and we were unable to recover it.
00:40:09.998 [2024-11-28 13:10:39.866200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.998 [2024-11-28 13:10:39.866252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.998 [2024-11-28 13:10:39.866265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.998 [2024-11-28 13:10:39.866272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.998 [2024-11-28 13:10:39.866278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.998 [2024-11-28 13:10:39.866292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.998 qpair failed and we were unable to recover it.
00:40:09.998 [2024-11-28 13:10:39.876126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.998 [2024-11-28 13:10:39.876168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.998 [2024-11-28 13:10:39.876181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.998 [2024-11-28 13:10:39.876188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.998 [2024-11-28 13:10:39.876194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.998 [2024-11-28 13:10:39.876208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.998 qpair failed and we were unable to recover it.
00:40:09.998 [2024-11-28 13:10:39.886184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.998 [2024-11-28 13:10:39.886240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.998 [2024-11-28 13:10:39.886253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.998 [2024-11-28 13:10:39.886260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.998 [2024-11-28 13:10:39.886266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.998 [2024-11-28 13:10:39.886280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.998 qpair failed and we were unable to recover it.
00:40:09.998 [2024-11-28 13:10:39.896187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.998 [2024-11-28 13:10:39.896234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.998 [2024-11-28 13:10:39.896247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.998 [2024-11-28 13:10:39.896254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.998 [2024-11-28 13:10:39.896260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.998 [2024-11-28 13:10:39.896277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.998 qpair failed and we were unable to recover it.
00:40:09.998 [2024-11-28 13:10:39.906227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.998 [2024-11-28 13:10:39.906274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.998 [2024-11-28 13:10:39.906287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.998 [2024-11-28 13:10:39.906294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.998 [2024-11-28 13:10:39.906300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.998 [2024-11-28 13:10:39.906314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.998 qpair failed and we were unable to recover it.
00:40:09.998 [2024-11-28 13:10:39.916060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.998 [2024-11-28 13:10:39.916102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.998 [2024-11-28 13:10:39.916115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.998 [2024-11-28 13:10:39.916122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.998 [2024-11-28 13:10:39.916128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.998 [2024-11-28 13:10:39.916142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.998 qpair failed and we were unable to recover it.
00:40:09.998 [2024-11-28 13:10:39.926059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.998 [2024-11-28 13:10:39.926103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.998 [2024-11-28 13:10:39.926116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.998 [2024-11-28 13:10:39.926126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.998 [2024-11-28 13:10:39.926133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.998 [2024-11-28 13:10:39.926147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.998 qpair failed and we were unable to recover it.
00:40:09.998 [2024-11-28 13:10:39.936205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.998 [2024-11-28 13:10:39.936268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.998 [2024-11-28 13:10:39.936281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.998 [2024-11-28 13:10:39.936288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.998 [2024-11-28 13:10:39.936294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.998 [2024-11-28 13:10:39.936308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.998 qpair failed and we were unable to recover it.
00:40:09.999 [2024-11-28 13:10:39.946295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.999 [2024-11-28 13:10:39.946345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.999 [2024-11-28 13:10:39.946357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.999 [2024-11-28 13:10:39.946364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.999 [2024-11-28 13:10:39.946370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.999 [2024-11-28 13:10:39.946384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.999 qpair failed and we were unable to recover it.
00:40:09.999 [2024-11-28 13:10:39.956175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.999 [2024-11-28 13:10:39.956220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.999 [2024-11-28 13:10:39.956233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.999 [2024-11-28 13:10:39.956239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.999 [2024-11-28 13:10:39.956246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.999 [2024-11-28 13:10:39.956260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.999 qpair failed and we were unable to recover it.
00:40:09.999 [2024-11-28 13:10:39.966214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.999 [2024-11-28 13:10:39.966261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.999 [2024-11-28 13:10:39.966274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.999 [2024-11-28 13:10:39.966280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.999 [2024-11-28 13:10:39.966286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.999 [2024-11-28 13:10:39.966304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.999 qpair failed and we were unable to recover it.
00:40:09.999 [2024-11-28 13:10:39.976216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.999 [2024-11-28 13:10:39.976262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.999 [2024-11-28 13:10:39.976275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.999 [2024-11-28 13:10:39.976281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.999 [2024-11-28 13:10:39.976288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.999 [2024-11-28 13:10:39.976301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.999 qpair failed and we were unable to recover it.
00:40:09.999 [2024-11-28 13:10:39.986239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.999 [2024-11-28 13:10:39.986294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.999 [2024-11-28 13:10:39.986307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.999 [2024-11-28 13:10:39.986313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.999 [2024-11-28 13:10:39.986319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.999 [2024-11-28 13:10:39.986334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.999 qpair failed and we were unable to recover it.
00:40:09.999 [2024-11-28 13:10:39.996219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.999 [2024-11-28 13:10:39.996260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.999 [2024-11-28 13:10:39.996273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.999 [2024-11-28 13:10:39.996279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.999 [2024-11-28 13:10:39.996285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.999 [2024-11-28 13:10:39.996299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.999 qpair failed and we were unable to recover it.
00:40:09.999 [2024-11-28 13:10:40.006223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.999 [2024-11-28 13:10:40.006272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.999 [2024-11-28 13:10:40.006286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.999 [2024-11-28 13:10:40.006292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.999 [2024-11-28 13:10:40.006299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.999 [2024-11-28 13:10:40.006314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.999 qpair failed and we were unable to recover it.
00:40:09.999 [2024-11-28 13:10:40.016266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.999 [2024-11-28 13:10:40.016319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.999 [2024-11-28 13:10:40.016333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.999 [2024-11-28 13:10:40.016339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.999 [2024-11-28 13:10:40.016346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.999 [2024-11-28 13:10:40.016360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.999 qpair failed and we were unable to recover it.
00:40:09.999 [2024-11-28 13:10:40.026217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:09.999 [2024-11-28 13:10:40.026302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:09.999 [2024-11-28 13:10:40.026315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:09.999 [2024-11-28 13:10:40.026322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:09.999 [2024-11-28 13:10:40.026328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:09.999 [2024-11-28 13:10:40.026342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.999 qpair failed and we were unable to recover it.
00:40:09.999 [2024-11-28 13:10:40.036246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.999 [2024-11-28 13:10:40.036307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.999 [2024-11-28 13:10:40.036319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.999 [2024-11-28 13:10:40.036326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.999 [2024-11-28 13:10:40.036332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.999 [2024-11-28 13:10:40.036346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.999 qpair failed and we were unable to recover it. 
00:40:09.999 [2024-11-28 13:10:40.046344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.999 [2024-11-28 13:10:40.046406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.999 [2024-11-28 13:10:40.046418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.999 [2024-11-28 13:10:40.046425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.999 [2024-11-28 13:10:40.046431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.999 [2024-11-28 13:10:40.046445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.999 qpair failed and we were unable to recover it. 
00:40:09.999 [2024-11-28 13:10:40.056323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.999 [2024-11-28 13:10:40.056383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.999 [2024-11-28 13:10:40.056399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.999 [2024-11-28 13:10:40.056406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.999 [2024-11-28 13:10:40.056412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:09.999 [2024-11-28 13:10:40.056426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:09.999 qpair failed and we were unable to recover it. 
00:40:09.999 [2024-11-28 13:10:40.066281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.999 [2024-11-28 13:10:40.066338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.999 [2024-11-28 13:10:40.066351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.999 [2024-11-28 13:10:40.066357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.999 [2024-11-28 13:10:40.066363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.000 [2024-11-28 13:10:40.066378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.000 qpair failed and we were unable to recover it. 
00:40:10.000 [2024-11-28 13:10:40.076286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.000 [2024-11-28 13:10:40.076336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.000 [2024-11-28 13:10:40.076348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.000 [2024-11-28 13:10:40.076355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.000 [2024-11-28 13:10:40.076362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.000 [2024-11-28 13:10:40.076376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.000 qpair failed and we were unable to recover it. 
00:40:10.000 [2024-11-28 13:10:40.086182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.000 [2024-11-28 13:10:40.086230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.000 [2024-11-28 13:10:40.086243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.000 [2024-11-28 13:10:40.086250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.000 [2024-11-28 13:10:40.086256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.000 [2024-11-28 13:10:40.086270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.000 qpair failed and we were unable to recover it. 
00:40:10.000 [2024-11-28 13:10:40.096271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.000 [2024-11-28 13:10:40.096323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.000 [2024-11-28 13:10:40.096336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.000 [2024-11-28 13:10:40.096342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.000 [2024-11-28 13:10:40.096349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.000 [2024-11-28 13:10:40.096367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.000 qpair failed and we were unable to recover it. 
00:40:10.000 [2024-11-28 13:10:40.106268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.000 [2024-11-28 13:10:40.106318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.000 [2024-11-28 13:10:40.106331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.000 [2024-11-28 13:10:40.106340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.000 [2024-11-28 13:10:40.106347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.000 [2024-11-28 13:10:40.106362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.000 qpair failed and we were unable to recover it. 
00:40:10.000 [2024-11-28 13:10:40.116261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.000 [2024-11-28 13:10:40.116302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.000 [2024-11-28 13:10:40.116315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.000 [2024-11-28 13:10:40.116322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.000 [2024-11-28 13:10:40.116328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.000 [2024-11-28 13:10:40.116342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.000 qpair failed and we were unable to recover it. 
00:40:10.263 [2024-11-28 13:10:40.126287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.263 [2024-11-28 13:10:40.126367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.263 [2024-11-28 13:10:40.126380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.263 [2024-11-28 13:10:40.126386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.263 [2024-11-28 13:10:40.126393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.263 [2024-11-28 13:10:40.126407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.263 qpair failed and we were unable to recover it. 
00:40:10.263 [2024-11-28 13:10:40.136337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.263 [2024-11-28 13:10:40.136392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.263 [2024-11-28 13:10:40.136405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.263 [2024-11-28 13:10:40.136412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.263 [2024-11-28 13:10:40.136418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.263 [2024-11-28 13:10:40.136432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.263 qpair failed and we were unable to recover it. 
00:40:10.263 [2024-11-28 13:10:40.146324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.263 [2024-11-28 13:10:40.146377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.263 [2024-11-28 13:10:40.146389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.263 [2024-11-28 13:10:40.146396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.263 [2024-11-28 13:10:40.146402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.263 [2024-11-28 13:10:40.146416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.263 qpair failed and we were unable to recover it. 
00:40:10.263 [2024-11-28 13:10:40.156294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.263 [2024-11-28 13:10:40.156340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.263 [2024-11-28 13:10:40.156353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.263 [2024-11-28 13:10:40.156359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.263 [2024-11-28 13:10:40.156365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.263 [2024-11-28 13:10:40.156379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.263 qpair failed and we were unable to recover it. 
00:40:10.263 [2024-11-28 13:10:40.166302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.263 [2024-11-28 13:10:40.166353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.263 [2024-11-28 13:10:40.166365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.263 [2024-11-28 13:10:40.166372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.263 [2024-11-28 13:10:40.166378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.263 [2024-11-28 13:10:40.166392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.263 qpair failed and we were unable to recover it. 
00:40:10.263 [2024-11-28 13:10:40.176306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.263 [2024-11-28 13:10:40.176358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.263 [2024-11-28 13:10:40.176371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.263 [2024-11-28 13:10:40.176377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.263 [2024-11-28 13:10:40.176383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.263 [2024-11-28 13:10:40.176397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.263 qpair failed and we were unable to recover it. 
00:40:10.263 [2024-11-28 13:10:40.186293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.263 [2024-11-28 13:10:40.186341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.263 [2024-11-28 13:10:40.186357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.263 [2024-11-28 13:10:40.186364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.263 [2024-11-28 13:10:40.186370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.263 [2024-11-28 13:10:40.186385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.263 qpair failed and we were unable to recover it. 
00:40:10.263 [2024-11-28 13:10:40.196271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.263 [2024-11-28 13:10:40.196314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.263 [2024-11-28 13:10:40.196327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.263 [2024-11-28 13:10:40.196333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.263 [2024-11-28 13:10:40.196340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.263 [2024-11-28 13:10:40.196354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.263 qpair failed and we were unable to recover it. 
00:40:10.263 [2024-11-28 13:10:40.206288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.264 [2024-11-28 13:10:40.206333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.264 [2024-11-28 13:10:40.206345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.264 [2024-11-28 13:10:40.206352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.264 [2024-11-28 13:10:40.206358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.264 [2024-11-28 13:10:40.206372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.264 qpair failed and we were unable to recover it. 
00:40:10.264 [2024-11-28 13:10:40.216194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.264 [2024-11-28 13:10:40.216269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.264 [2024-11-28 13:10:40.216283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.264 [2024-11-28 13:10:40.216290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.264 [2024-11-28 13:10:40.216298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.264 [2024-11-28 13:10:40.216312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.264 qpair failed and we were unable to recover it. 
00:40:10.264 [2024-11-28 13:10:40.226320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.264 [2024-11-28 13:10:40.226374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.264 [2024-11-28 13:10:40.226386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.264 [2024-11-28 13:10:40.226393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.264 [2024-11-28 13:10:40.226406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.264 [2024-11-28 13:10:40.226420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.264 qpair failed and we were unable to recover it. 
00:40:10.264 [2024-11-28 13:10:40.236352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.264 [2024-11-28 13:10:40.236395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.264 [2024-11-28 13:10:40.236408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.264 [2024-11-28 13:10:40.236415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.264 [2024-11-28 13:10:40.236421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.264 [2024-11-28 13:10:40.236435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.264 qpair failed and we were unable to recover it. 
00:40:10.264 [2024-11-28 13:10:40.246297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.264 [2024-11-28 13:10:40.246348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.264 [2024-11-28 13:10:40.246361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.264 [2024-11-28 13:10:40.246367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.264 [2024-11-28 13:10:40.246374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.264 [2024-11-28 13:10:40.246387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.264 qpair failed and we were unable to recover it. 
00:40:10.264 [2024-11-28 13:10:40.256354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.264 [2024-11-28 13:10:40.256467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.264 [2024-11-28 13:10:40.256480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.264 [2024-11-28 13:10:40.256487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.264 [2024-11-28 13:10:40.256494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.264 [2024-11-28 13:10:40.256508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.264 qpair failed and we were unable to recover it. 
00:40:10.264 [2024-11-28 13:10:40.266360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.264 [2024-11-28 13:10:40.266406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.264 [2024-11-28 13:10:40.266418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.264 [2024-11-28 13:10:40.266425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.264 [2024-11-28 13:10:40.266431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.264 [2024-11-28 13:10:40.266445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.264 qpair failed and we were unable to recover it. 
00:40:10.264 [2024-11-28 13:10:40.276317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.264 [2024-11-28 13:10:40.276366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.264 [2024-11-28 13:10:40.276379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.264 [2024-11-28 13:10:40.276385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.264 [2024-11-28 13:10:40.276391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.264 [2024-11-28 13:10:40.276405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.264 qpair failed and we were unable to recover it. 
00:40:10.264 [2024-11-28 13:10:40.286364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.264 [2024-11-28 13:10:40.286410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.264 [2024-11-28 13:10:40.286422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.264 [2024-11-28 13:10:40.286429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.264 [2024-11-28 13:10:40.286435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.264 [2024-11-28 13:10:40.286449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.264 qpair failed and we were unable to recover it. 
00:40:10.264 [2024-11-28 13:10:40.296374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.264 [2024-11-28 13:10:40.296422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.264 [2024-11-28 13:10:40.296434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.264 [2024-11-28 13:10:40.296441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.264 [2024-11-28 13:10:40.296447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:10.264 [2024-11-28 13:10:40.296461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:10.264 qpair failed and we were unable to recover it. 
00:40:10.264 [2024-11-28 13:10:40.306430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.264 [2024-11-28 13:10:40.306476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.264 [2024-11-28 13:10:40.306488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.264 [2024-11-28 13:10:40.306495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.264 [2024-11-28 13:10:40.306501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.264 [2024-11-28 13:10:40.306515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.264 qpair failed and we were unable to recover it.
00:40:10.264 [2024-11-28 13:10:40.316213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.264 [2024-11-28 13:10:40.316276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.264 [2024-11-28 13:10:40.316292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.264 [2024-11-28 13:10:40.316298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.264 [2024-11-28 13:10:40.316304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.264 [2024-11-28 13:10:40.316318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.264 qpair failed and we were unable to recover it.
00:40:10.264 [2024-11-28 13:10:40.326240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.264 [2024-11-28 13:10:40.326289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.264 [2024-11-28 13:10:40.326305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.264 [2024-11-28 13:10:40.326312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.264 [2024-11-28 13:10:40.326318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.264 [2024-11-28 13:10:40.326339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.264 qpair failed and we were unable to recover it.
00:40:10.264 [2024-11-28 13:10:40.336374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.264 [2024-11-28 13:10:40.336418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.265 [2024-11-28 13:10:40.336431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.265 [2024-11-28 13:10:40.336438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.265 [2024-11-28 13:10:40.336444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.265 [2024-11-28 13:10:40.336458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.265 qpair failed and we were unable to recover it.
00:40:10.265 [2024-11-28 13:10:40.346413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.265 [2024-11-28 13:10:40.346463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.265 [2024-11-28 13:10:40.346477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.265 [2024-11-28 13:10:40.346483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.265 [2024-11-28 13:10:40.346489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.265 [2024-11-28 13:10:40.346503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.265 qpair failed and we were unable to recover it.
00:40:10.265 [2024-11-28 13:10:40.356346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.265 [2024-11-28 13:10:40.356392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.265 [2024-11-28 13:10:40.356404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.265 [2024-11-28 13:10:40.356414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.265 [2024-11-28 13:10:40.356421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.265 [2024-11-28 13:10:40.356434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.265 qpair failed and we were unable to recover it.
00:40:10.265 [2024-11-28 13:10:40.366365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.265 [2024-11-28 13:10:40.366414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.265 [2024-11-28 13:10:40.366427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.265 [2024-11-28 13:10:40.366433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.265 [2024-11-28 13:10:40.366439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.265 [2024-11-28 13:10:40.366453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.265 qpair failed and we were unable to recover it.
00:40:10.265 [2024-11-28 13:10:40.376393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.265 [2024-11-28 13:10:40.376465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.265 [2024-11-28 13:10:40.376478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.265 [2024-11-28 13:10:40.376484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.265 [2024-11-28 13:10:40.376490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.265 [2024-11-28 13:10:40.376504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.265 qpair failed and we were unable to recover it.
00:40:10.265 [2024-11-28 13:10:40.386324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.265 [2024-11-28 13:10:40.386371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.265 [2024-11-28 13:10:40.386383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.265 [2024-11-28 13:10:40.386390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.265 [2024-11-28 13:10:40.386396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.265 [2024-11-28 13:10:40.386410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.265 qpair failed and we were unable to recover it.
00:40:10.528 [2024-11-28 13:10:40.396242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.528 [2024-11-28 13:10:40.396294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.528 [2024-11-28 13:10:40.396307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.528 [2024-11-28 13:10:40.396314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.528 [2024-11-28 13:10:40.396320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.528 [2024-11-28 13:10:40.396333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.528 qpair failed and we were unable to recover it.
00:40:10.528 [2024-11-28 13:10:40.406422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.528 [2024-11-28 13:10:40.406484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.528 [2024-11-28 13:10:40.406497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.528 [2024-11-28 13:10:40.406504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.528 [2024-11-28 13:10:40.406510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.528 [2024-11-28 13:10:40.406523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.528 qpair failed and we were unable to recover it.
00:40:10.528 [2024-11-28 13:10:40.416396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.528 [2024-11-28 13:10:40.416444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.528 [2024-11-28 13:10:40.416458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.528 [2024-11-28 13:10:40.416465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.528 [2024-11-28 13:10:40.416471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.528 [2024-11-28 13:10:40.416488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.528 qpair failed and we were unable to recover it.
00:40:10.528 [2024-11-28 13:10:40.426412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.528 [2024-11-28 13:10:40.426501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.528 [2024-11-28 13:10:40.426514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.528 [2024-11-28 13:10:40.426521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.528 [2024-11-28 13:10:40.426527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.528 [2024-11-28 13:10:40.426541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.528 qpair failed and we were unable to recover it.
00:40:10.528 [2024-11-28 13:10:40.436401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.528 [2024-11-28 13:10:40.436443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.528 [2024-11-28 13:10:40.436456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.528 [2024-11-28 13:10:40.436462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.528 [2024-11-28 13:10:40.436469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.528 [2024-11-28 13:10:40.436482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.528 qpair failed and we were unable to recover it.
00:40:10.528 [2024-11-28 13:10:40.446399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.529 [2024-11-28 13:10:40.446453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.529 [2024-11-28 13:10:40.446465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.529 [2024-11-28 13:10:40.446472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.529 [2024-11-28 13:10:40.446478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.529 [2024-11-28 13:10:40.446492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.529 qpair failed and we were unable to recover it.
00:40:10.529 [2024-11-28 13:10:40.456403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.529 [2024-11-28 13:10:40.456461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.529 [2024-11-28 13:10:40.456474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.529 [2024-11-28 13:10:40.456480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.529 [2024-11-28 13:10:40.456487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.529 [2024-11-28 13:10:40.456500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.529 qpair failed and we were unable to recover it.
00:40:10.529 [2024-11-28 13:10:40.466483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.529 [2024-11-28 13:10:40.466527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.529 [2024-11-28 13:10:40.466540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.529 [2024-11-28 13:10:40.466547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.529 [2024-11-28 13:10:40.466553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.529 [2024-11-28 13:10:40.466567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.529 qpair failed and we were unable to recover it.
00:40:10.529 [2024-11-28 13:10:40.476442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.529 [2024-11-28 13:10:40.476496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.529 [2024-11-28 13:10:40.476509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.529 [2024-11-28 13:10:40.476515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.529 [2024-11-28 13:10:40.476521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.529 [2024-11-28 13:10:40.476535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.529 qpair failed and we were unable to recover it.
00:40:10.529 [2024-11-28 13:10:40.486407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.529 [2024-11-28 13:10:40.486475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.529 [2024-11-28 13:10:40.486487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.529 [2024-11-28 13:10:40.486497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.529 [2024-11-28 13:10:40.486503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.529 [2024-11-28 13:10:40.486516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.529 qpair failed and we were unable to recover it.
00:40:10.529 [2024-11-28 13:10:40.496429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.529 [2024-11-28 13:10:40.496475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.529 [2024-11-28 13:10:40.496487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.529 [2024-11-28 13:10:40.496494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.529 [2024-11-28 13:10:40.496500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.529 [2024-11-28 13:10:40.496513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.529 qpair failed and we were unable to recover it.
00:40:10.529 [2024-11-28 13:10:40.506447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.529 [2024-11-28 13:10:40.506494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.529 [2024-11-28 13:10:40.506507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.529 [2024-11-28 13:10:40.506513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.529 [2024-11-28 13:10:40.506519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.529 [2024-11-28 13:10:40.506533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.529 qpair failed and we were unable to recover it.
00:40:10.529 [2024-11-28 13:10:40.516401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.529 [2024-11-28 13:10:40.516445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.529 [2024-11-28 13:10:40.516458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.529 [2024-11-28 13:10:40.516464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.529 [2024-11-28 13:10:40.516471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.529 [2024-11-28 13:10:40.516484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.529 qpair failed and we were unable to recover it.
00:40:10.529 [2024-11-28 13:10:40.526440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.529 [2024-11-28 13:10:40.526484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.529 [2024-11-28 13:10:40.526497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.529 [2024-11-28 13:10:40.526503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.529 [2024-11-28 13:10:40.526509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.529 [2024-11-28 13:10:40.526526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.529 qpair failed and we were unable to recover it.
00:40:10.529 [2024-11-28 13:10:40.536367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.529 [2024-11-28 13:10:40.536414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.529 [2024-11-28 13:10:40.536427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.529 [2024-11-28 13:10:40.536434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.529 [2024-11-28 13:10:40.536440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.529 [2024-11-28 13:10:40.536454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.529 qpair failed and we were unable to recover it.
00:40:10.529 [2024-11-28 13:10:40.546457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.529 [2024-11-28 13:10:40.546510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.529 [2024-11-28 13:10:40.546523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.529 [2024-11-28 13:10:40.546530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.529 [2024-11-28 13:10:40.546536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.529 [2024-11-28 13:10:40.546549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.529 qpair failed and we were unable to recover it.
00:40:10.529 [2024-11-28 13:10:40.556488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.529 [2024-11-28 13:10:40.556564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.529 [2024-11-28 13:10:40.556576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.529 [2024-11-28 13:10:40.556583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.529 [2024-11-28 13:10:40.556589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.529 [2024-11-28 13:10:40.556602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.529 qpair failed and we were unable to recover it.
00:40:10.529 [2024-11-28 13:10:40.566466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.529 [2024-11-28 13:10:40.566514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.529 [2024-11-28 13:10:40.566527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.529 [2024-11-28 13:10:40.566533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.529 [2024-11-28 13:10:40.566539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.530 [2024-11-28 13:10:40.566553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.530 qpair failed and we were unable to recover it.
00:40:10.530 [2024-11-28 13:10:40.576462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.530 [2024-11-28 13:10:40.576518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.530 [2024-11-28 13:10:40.576530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.530 [2024-11-28 13:10:40.576537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.530 [2024-11-28 13:10:40.576543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.530 [2024-11-28 13:10:40.576556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.530 qpair failed and we were unable to recover it.
00:40:10.530 [2024-11-28 13:10:40.586492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.530 [2024-11-28 13:10:40.586577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.530 [2024-11-28 13:10:40.586589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.530 [2024-11-28 13:10:40.586596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.530 [2024-11-28 13:10:40.586602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.530 [2024-11-28 13:10:40.586615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.530 qpair failed and we were unable to recover it.
00:40:10.530 [2024-11-28 13:10:40.596476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.530 [2024-11-28 13:10:40.596529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.530 [2024-11-28 13:10:40.596541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.530 [2024-11-28 13:10:40.596548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.530 [2024-11-28 13:10:40.596554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.530 [2024-11-28 13:10:40.596567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.530 qpair failed and we were unable to recover it.
00:40:10.530 [2024-11-28 13:10:40.606470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.530 [2024-11-28 13:10:40.606519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.530 [2024-11-28 13:10:40.606531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.530 [2024-11-28 13:10:40.606537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.530 [2024-11-28 13:10:40.606544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.530 [2024-11-28 13:10:40.606557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.530 qpair failed and we were unable to recover it.
00:40:10.530 [2024-11-28 13:10:40.616538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.530 [2024-11-28 13:10:40.616635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.530 [2024-11-28 13:10:40.616650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.530 [2024-11-28 13:10:40.616657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.530 [2024-11-28 13:10:40.616663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.530 [2024-11-28 13:10:40.616677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.530 qpair failed and we were unable to recover it.
00:40:10.530 [2024-11-28 13:10:40.626517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.530 [2024-11-28 13:10:40.626563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.530 [2024-11-28 13:10:40.626577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.530 [2024-11-28 13:10:40.626583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.530 [2024-11-28 13:10:40.626590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.530 [2024-11-28 13:10:40.626604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.530 qpair failed and we were unable to recover it.
00:40:10.530 [2024-11-28 13:10:40.636494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.530 [2024-11-28 13:10:40.636540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.530 [2024-11-28 13:10:40.636552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.530 [2024-11-28 13:10:40.636559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.530 [2024-11-28 13:10:40.636565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.530 [2024-11-28 13:10:40.636579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.530 qpair failed and we were unable to recover it.
00:40:10.530 [2024-11-28 13:10:40.646495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.530 [2024-11-28 13:10:40.646543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.530 [2024-11-28 13:10:40.646556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.530 [2024-11-28 13:10:40.646562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.530 [2024-11-28 13:10:40.646568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.530 [2024-11-28 13:10:40.646582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.530 qpair failed and we were unable to recover it.
00:40:10.793 [2024-11-28 13:10:40.656387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.793 [2024-11-28 13:10:40.656439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.793 [2024-11-28 13:10:40.656451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.793 [2024-11-28 13:10:40.656458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.793 [2024-11-28 13:10:40.656467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.793 [2024-11-28 13:10:40.656481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.793 qpair failed and we were unable to recover it.
00:40:10.793 [2024-11-28 13:10:40.666507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.793 [2024-11-28 13:10:40.666551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.793 [2024-11-28 13:10:40.666564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.793 [2024-11-28 13:10:40.666570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.793 [2024-11-28 13:10:40.666576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.793 [2024-11-28 13:10:40.666591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.793 qpair failed and we were unable to recover it.
00:40:10.793 [2024-11-28 13:10:40.676505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.793 [2024-11-28 13:10:40.676568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.793 [2024-11-28 13:10:40.676581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.793 [2024-11-28 13:10:40.676587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.793 [2024-11-28 13:10:40.676594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.793 [2024-11-28 13:10:40.676607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.793 qpair failed and we were unable to recover it.
00:40:10.793 [2024-11-28 13:10:40.686506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.793 [2024-11-28 13:10:40.686590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.793 [2024-11-28 13:10:40.686602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.793 [2024-11-28 13:10:40.686609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.793 [2024-11-28 13:10:40.686615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.793 [2024-11-28 13:10:40.686629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.793 qpair failed and we were unable to recover it.
00:40:10.793 [2024-11-28 13:10:40.696507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.794 [2024-11-28 13:10:40.696553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.794 [2024-11-28 13:10:40.696565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.794 [2024-11-28 13:10:40.696572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.794 [2024-11-28 13:10:40.696578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.794 [2024-11-28 13:10:40.696591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.794 qpair failed and we were unable to recover it.
00:40:10.794 [2024-11-28 13:10:40.706551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.794 [2024-11-28 13:10:40.706602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.794 [2024-11-28 13:10:40.706614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.794 [2024-11-28 13:10:40.706621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.794 [2024-11-28 13:10:40.706627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.794 [2024-11-28 13:10:40.706641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.794 qpair failed and we were unable to recover it.
00:40:10.794 [2024-11-28 13:10:40.716513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.794 [2024-11-28 13:10:40.716552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.794 [2024-11-28 13:10:40.716564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.794 [2024-11-28 13:10:40.716570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.794 [2024-11-28 13:10:40.716577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.794 [2024-11-28 13:10:40.716590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.794 qpair failed and we were unable to recover it.
00:40:10.794 [2024-11-28 13:10:40.726547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.794 [2024-11-28 13:10:40.726597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.794 [2024-11-28 13:10:40.726609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.794 [2024-11-28 13:10:40.726616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.794 [2024-11-28 13:10:40.726622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.794 [2024-11-28 13:10:40.726636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.794 qpair failed and we were unable to recover it.
00:40:10.794 [2024-11-28 13:10:40.736535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.794 [2024-11-28 13:10:40.736582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.794 [2024-11-28 13:10:40.736594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.794 [2024-11-28 13:10:40.736601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.794 [2024-11-28 13:10:40.736607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.794 [2024-11-28 13:10:40.736620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.794 qpair failed and we were unable to recover it.
00:40:10.794 [2024-11-28 13:10:40.746546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.794 [2024-11-28 13:10:40.746640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.794 [2024-11-28 13:10:40.746655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.794 [2024-11-28 13:10:40.746662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.794 [2024-11-28 13:10:40.746668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.794 [2024-11-28 13:10:40.746682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.794 qpair failed and we were unable to recover it.
00:40:10.794 [2024-11-28 13:10:40.756406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.794 [2024-11-28 13:10:40.756449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.794 [2024-11-28 13:10:40.756462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.794 [2024-11-28 13:10:40.756469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.794 [2024-11-28 13:10:40.756475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.794 [2024-11-28 13:10:40.756489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.794 qpair failed and we were unable to recover it.
00:40:10.794 [2024-11-28 13:10:40.766528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.794 [2024-11-28 13:10:40.766577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.794 [2024-11-28 13:10:40.766589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.794 [2024-11-28 13:10:40.766596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.794 [2024-11-28 13:10:40.766602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.794 [2024-11-28 13:10:40.766616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.794 qpair failed and we were unable to recover it.
00:40:10.794 [2024-11-28 13:10:40.776561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.794 [2024-11-28 13:10:40.776606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.794 [2024-11-28 13:10:40.776618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.794 [2024-11-28 13:10:40.776625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.794 [2024-11-28 13:10:40.776631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.794 [2024-11-28 13:10:40.776644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.794 qpair failed and we were unable to recover it.
00:40:10.794 [2024-11-28 13:10:40.786491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.794 [2024-11-28 13:10:40.786544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.794 [2024-11-28 13:10:40.786556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.794 [2024-11-28 13:10:40.786562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.794 [2024-11-28 13:10:40.786572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.794 [2024-11-28 13:10:40.786585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.794 qpair failed and we were unable to recover it.
00:40:10.794 [2024-11-28 13:10:40.796537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.794 [2024-11-28 13:10:40.796579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.794 [2024-11-28 13:10:40.796592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.794 [2024-11-28 13:10:40.796599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.794 [2024-11-28 13:10:40.796605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.794 [2024-11-28 13:10:40.796619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.794 qpair failed and we were unable to recover it.
00:40:10.794 [2024-11-28 13:10:40.806532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.794 [2024-11-28 13:10:40.806576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.794 [2024-11-28 13:10:40.806589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.794 [2024-11-28 13:10:40.806595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.794 [2024-11-28 13:10:40.806601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.794 [2024-11-28 13:10:40.806615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.794 qpair failed and we were unable to recover it.
00:40:10.794 [2024-11-28 13:10:40.816621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.794 [2024-11-28 13:10:40.816669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.794 [2024-11-28 13:10:40.816681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.794 [2024-11-28 13:10:40.816688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.794 [2024-11-28 13:10:40.816694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.794 [2024-11-28 13:10:40.816708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.794 qpair failed and we were unable to recover it.
00:40:10.794 [2024-11-28 13:10:40.826471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.795 [2024-11-28 13:10:40.826520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.795 [2024-11-28 13:10:40.826533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.795 [2024-11-28 13:10:40.826539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.795 [2024-11-28 13:10:40.826545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.795 [2024-11-28 13:10:40.826559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.795 qpair failed and we were unable to recover it.
00:40:10.795 [2024-11-28 13:10:40.836575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.795 [2024-11-28 13:10:40.836624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.795 [2024-11-28 13:10:40.836637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.795 [2024-11-28 13:10:40.836644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.795 [2024-11-28 13:10:40.836650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.795 [2024-11-28 13:10:40.836668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.795 qpair failed and we were unable to recover it.
00:40:10.795 [2024-11-28 13:10:40.846481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.795 [2024-11-28 13:10:40.846567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.795 [2024-11-28 13:10:40.846583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.795 [2024-11-28 13:10:40.846589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.795 [2024-11-28 13:10:40.846596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.795 [2024-11-28 13:10:40.846616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.795 qpair failed and we were unable to recover it.
00:40:10.795 [2024-11-28 13:10:40.856587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.795 [2024-11-28 13:10:40.856638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.795 [2024-11-28 13:10:40.856651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.795 [2024-11-28 13:10:40.856657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.795 [2024-11-28 13:10:40.856664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.795 [2024-11-28 13:10:40.856677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.795 qpair failed and we were unable to recover it.
00:40:10.795 [2024-11-28 13:10:40.866629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.795 [2024-11-28 13:10:40.866670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.795 [2024-11-28 13:10:40.866683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.795 [2024-11-28 13:10:40.866690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.795 [2024-11-28 13:10:40.866696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.795 [2024-11-28 13:10:40.866709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.795 qpair failed and we were unable to recover it.
00:40:10.795 [2024-11-28 13:10:40.876594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.795 [2024-11-28 13:10:40.876683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.795 [2024-11-28 13:10:40.876702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.795 [2024-11-28 13:10:40.876708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.795 [2024-11-28 13:10:40.876714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.795 [2024-11-28 13:10:40.876728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.795 qpair failed and we were unable to recover it.
00:40:10.795 [2024-11-28 13:10:40.886649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.795 [2024-11-28 13:10:40.886723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.795 [2024-11-28 13:10:40.886736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.795 [2024-11-28 13:10:40.886743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.795 [2024-11-28 13:10:40.886749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.795 [2024-11-28 13:10:40.886763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.795 qpair failed and we were unable to recover it.
00:40:10.795 [2024-11-28 13:10:40.896583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.795 [2024-11-28 13:10:40.896634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.795 [2024-11-28 13:10:40.896647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.795 [2024-11-28 13:10:40.896654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.795 [2024-11-28 13:10:40.896660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.795 [2024-11-28 13:10:40.896674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.795 qpair failed and we were unable to recover it.
00:40:10.795 [2024-11-28 13:10:40.906638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.795 [2024-11-28 13:10:40.906719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.795 [2024-11-28 13:10:40.906731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.795 [2024-11-28 13:10:40.906738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.795 [2024-11-28 13:10:40.906744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.795 [2024-11-28 13:10:40.906758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.795 qpair failed and we were unable to recover it.
00:40:10.795 [2024-11-28 13:10:40.916601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.795 [2024-11-28 13:10:40.916645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.795 [2024-11-28 13:10:40.916658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.795 [2024-11-28 13:10:40.916667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.795 [2024-11-28 13:10:40.916674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:10.795 [2024-11-28 13:10:40.916687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:10.795 qpair failed and we were unable to recover it.
00:40:11.059 [2024-11-28 13:10:40.926604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.059 [2024-11-28 13:10:40.926650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.059 [2024-11-28 13:10:40.926663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.059 [2024-11-28 13:10:40.926670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.059 [2024-11-28 13:10:40.926676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.059 [2024-11-28 13:10:40.926690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.059 qpair failed and we were unable to recover it.
00:40:11.059 [2024-11-28 13:10:40.936660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.059 [2024-11-28 13:10:40.936755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.059 [2024-11-28 13:10:40.936767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.059 [2024-11-28 13:10:40.936774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.059 [2024-11-28 13:10:40.936780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.059 [2024-11-28 13:10:40.936794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.059 qpair failed and we were unable to recover it.
00:40:11.059 [2024-11-28 13:10:40.946647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.059 [2024-11-28 13:10:40.946696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.059 [2024-11-28 13:10:40.946708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.059 [2024-11-28 13:10:40.946715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.059 [2024-11-28 13:10:40.946721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.059 [2024-11-28 13:10:40.946735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.059 qpair failed and we were unable to recover it.
00:40:11.059 [2024-11-28 13:10:40.956579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.059 [2024-11-28 13:10:40.956625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.059 [2024-11-28 13:10:40.956638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.059 [2024-11-28 13:10:40.956645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.059 [2024-11-28 13:10:40.956651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.059 [2024-11-28 13:10:40.956668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.059 qpair failed and we were unable to recover it.
00:40:11.059 [2024-11-28 13:10:40.966665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.059 [2024-11-28 13:10:40.966710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.059 [2024-11-28 13:10:40.966723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.059 [2024-11-28 13:10:40.966730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.060 [2024-11-28 13:10:40.966737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.060 [2024-11-28 13:10:40.966752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.060 qpair failed and we were unable to recover it. 
00:40:11.060 [2024-11-28 13:10:40.976649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.060 [2024-11-28 13:10:40.976695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.060 [2024-11-28 13:10:40.976707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.060 [2024-11-28 13:10:40.976714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.060 [2024-11-28 13:10:40.976720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.060 [2024-11-28 13:10:40.976734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.060 qpair failed and we were unable to recover it. 
00:40:11.060 [2024-11-28 13:10:40.986634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.060 [2024-11-28 13:10:40.986687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.060 [2024-11-28 13:10:40.986703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.060 [2024-11-28 13:10:40.986710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.060 [2024-11-28 13:10:40.986716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.060 [2024-11-28 13:10:40.986731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.060 qpair failed and we were unable to recover it. 
00:40:11.060 [2024-11-28 13:10:40.996536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.060 [2024-11-28 13:10:40.996584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.060 [2024-11-28 13:10:40.996597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.060 [2024-11-28 13:10:40.996604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.060 [2024-11-28 13:10:40.996610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.060 [2024-11-28 13:10:40.996623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.060 qpair failed and we were unable to recover it. 
00:40:11.060 [2024-11-28 13:10:41.006645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.060 [2024-11-28 13:10:41.006703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.060 [2024-11-28 13:10:41.006716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.060 [2024-11-28 13:10:41.006723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.060 [2024-11-28 13:10:41.006730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.060 [2024-11-28 13:10:41.006748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.060 qpair failed and we were unable to recover it. 
00:40:11.060 [2024-11-28 13:10:41.016541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.060 [2024-11-28 13:10:41.016635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.060 [2024-11-28 13:10:41.016649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.060 [2024-11-28 13:10:41.016655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.060 [2024-11-28 13:10:41.016661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.060 [2024-11-28 13:10:41.016676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.060 qpair failed and we were unable to recover it. 
00:40:11.060 [2024-11-28 13:10:41.026725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.060 [2024-11-28 13:10:41.026776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.060 [2024-11-28 13:10:41.026788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.060 [2024-11-28 13:10:41.026795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.060 [2024-11-28 13:10:41.026801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.060 [2024-11-28 13:10:41.026814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.060 qpair failed and we were unable to recover it. 
00:40:11.060 [2024-11-28 13:10:41.036637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.060 [2024-11-28 13:10:41.036680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.060 [2024-11-28 13:10:41.036692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.060 [2024-11-28 13:10:41.036699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.060 [2024-11-28 13:10:41.036705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.060 [2024-11-28 13:10:41.036719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.060 qpair failed and we were unable to recover it. 
00:40:11.060 [2024-11-28 13:10:41.046663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.060 [2024-11-28 13:10:41.046709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.060 [2024-11-28 13:10:41.046721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.060 [2024-11-28 13:10:41.046732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.060 [2024-11-28 13:10:41.046738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.060 [2024-11-28 13:10:41.046752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.060 qpair failed and we were unable to recover it. 
00:40:11.060 [2024-11-28 13:10:41.056662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.060 [2024-11-28 13:10:41.056710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.060 [2024-11-28 13:10:41.056723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.060 [2024-11-28 13:10:41.056729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.060 [2024-11-28 13:10:41.056735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.060 [2024-11-28 13:10:41.056748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.060 qpair failed and we were unable to recover it. 
00:40:11.060 [2024-11-28 13:10:41.066687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.060 [2024-11-28 13:10:41.066738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.060 [2024-11-28 13:10:41.066751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.060 [2024-11-28 13:10:41.066757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.060 [2024-11-28 13:10:41.066764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.060 [2024-11-28 13:10:41.066777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.060 qpair failed and we were unable to recover it. 
00:40:11.060 [2024-11-28 13:10:41.076663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.060 [2024-11-28 13:10:41.076711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.060 [2024-11-28 13:10:41.076724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.060 [2024-11-28 13:10:41.076730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.060 [2024-11-28 13:10:41.076736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.060 [2024-11-28 13:10:41.076750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.060 qpair failed and we were unable to recover it. 
00:40:11.060 [2024-11-28 13:10:41.086651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.060 [2024-11-28 13:10:41.086697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.060 [2024-11-28 13:10:41.086710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.060 [2024-11-28 13:10:41.086716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.060 [2024-11-28 13:10:41.086722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.060 [2024-11-28 13:10:41.086740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.060 qpair failed and we were unable to recover it. 
00:40:11.060 [2024-11-28 13:10:41.096676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.060 [2024-11-28 13:10:41.096772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.060 [2024-11-28 13:10:41.096785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.060 [2024-11-28 13:10:41.096791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.060 [2024-11-28 13:10:41.096797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.061 [2024-11-28 13:10:41.096811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.061 qpair failed and we were unable to recover it. 
00:40:11.061 [2024-11-28 13:10:41.106735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.061 [2024-11-28 13:10:41.106781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.061 [2024-11-28 13:10:41.106794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.061 [2024-11-28 13:10:41.106800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.061 [2024-11-28 13:10:41.106806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.061 [2024-11-28 13:10:41.106820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.061 qpair failed and we were unable to recover it. 
00:40:11.061 [2024-11-28 13:10:41.116694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.061 [2024-11-28 13:10:41.116743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.061 [2024-11-28 13:10:41.116756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.061 [2024-11-28 13:10:41.116762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.061 [2024-11-28 13:10:41.116768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.061 [2024-11-28 13:10:41.116782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.061 qpair failed and we were unable to recover it. 
00:40:11.061 [2024-11-28 13:10:41.126734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.061 [2024-11-28 13:10:41.126810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.061 [2024-11-28 13:10:41.126822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.061 [2024-11-28 13:10:41.126829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.061 [2024-11-28 13:10:41.126835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.061 [2024-11-28 13:10:41.126849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.061 qpair failed and we were unable to recover it. 
00:40:11.061 [2024-11-28 13:10:41.136686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.061 [2024-11-28 13:10:41.136740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.061 [2024-11-28 13:10:41.136752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.061 [2024-11-28 13:10:41.136759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.061 [2024-11-28 13:10:41.136765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.061 [2024-11-28 13:10:41.136779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.061 qpair failed and we were unable to recover it. 
00:40:11.061 [2024-11-28 13:10:41.146715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.061 [2024-11-28 13:10:41.146813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.061 [2024-11-28 13:10:41.146826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.061 [2024-11-28 13:10:41.146833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.061 [2024-11-28 13:10:41.146838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.061 [2024-11-28 13:10:41.146852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.061 qpair failed and we were unable to recover it. 
00:40:11.061 [2024-11-28 13:10:41.156685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.061 [2024-11-28 13:10:41.156730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.061 [2024-11-28 13:10:41.156743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.061 [2024-11-28 13:10:41.156750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.061 [2024-11-28 13:10:41.156756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.061 [2024-11-28 13:10:41.156770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.061 qpair failed and we were unable to recover it. 
00:40:11.061 [2024-11-28 13:10:41.166589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.061 [2024-11-28 13:10:41.166637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.061 [2024-11-28 13:10:41.166650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.061 [2024-11-28 13:10:41.166656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.061 [2024-11-28 13:10:41.166662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.061 [2024-11-28 13:10:41.166676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.061 qpair failed and we were unable to recover it. 
00:40:11.061 [2024-11-28 13:10:41.176723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.061 [2024-11-28 13:10:41.176768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.061 [2024-11-28 13:10:41.176784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.061 [2024-11-28 13:10:41.176791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.061 [2024-11-28 13:10:41.176796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.061 [2024-11-28 13:10:41.176811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.061 qpair failed and we were unable to recover it. 
00:40:11.325 [2024-11-28 13:10:41.186626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.325 [2024-11-28 13:10:41.186674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.325 [2024-11-28 13:10:41.186687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.325 [2024-11-28 13:10:41.186694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.325 [2024-11-28 13:10:41.186700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.325 [2024-11-28 13:10:41.186719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.325 qpair failed and we were unable to recover it. 
00:40:11.325 [2024-11-28 13:10:41.196717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.325 [2024-11-28 13:10:41.196762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.325 [2024-11-28 13:10:41.196775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.325 [2024-11-28 13:10:41.196782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.325 [2024-11-28 13:10:41.196788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.325 [2024-11-28 13:10:41.196802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.325 qpair failed and we were unable to recover it. 
00:40:11.325 [2024-11-28 13:10:41.206694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.325 [2024-11-28 13:10:41.206741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.325 [2024-11-28 13:10:41.206754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.325 [2024-11-28 13:10:41.206760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.325 [2024-11-28 13:10:41.206766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.325 [2024-11-28 13:10:41.206780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.325 qpair failed and we were unable to recover it. 
00:40:11.325 [2024-11-28 13:10:41.216734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.325 [2024-11-28 13:10:41.216784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.325 [2024-11-28 13:10:41.216796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.325 [2024-11-28 13:10:41.216802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.325 [2024-11-28 13:10:41.216812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.325 [2024-11-28 13:10:41.216826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.325 qpair failed and we were unable to recover it. 
00:40:11.325 [2024-11-28 13:10:41.226752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.325 [2024-11-28 13:10:41.226801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.325 [2024-11-28 13:10:41.226814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.325 [2024-11-28 13:10:41.226820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.325 [2024-11-28 13:10:41.226826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.325 [2024-11-28 13:10:41.226840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.325 qpair failed and we were unable to recover it. 
00:40:11.325 [2024-11-28 13:10:41.236713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.325 [2024-11-28 13:10:41.236760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.325 [2024-11-28 13:10:41.236772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.325 [2024-11-28 13:10:41.236779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.325 [2024-11-28 13:10:41.236785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.325 [2024-11-28 13:10:41.236798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.325 qpair failed and we were unable to recover it. 
00:40:11.325 [2024-11-28 13:10:41.246734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.325 [2024-11-28 13:10:41.246782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.325 [2024-11-28 13:10:41.246795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.325 [2024-11-28 13:10:41.246801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.325 [2024-11-28 13:10:41.246807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.325 [2024-11-28 13:10:41.246821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.325 qpair failed and we were unable to recover it. 
00:40:11.325 [2024-11-28 13:10:41.256751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.326 [2024-11-28 13:10:41.256817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.326 [2024-11-28 13:10:41.256829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.326 [2024-11-28 13:10:41.256835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.326 [2024-11-28 13:10:41.256842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.326 [2024-11-28 13:10:41.256856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.326 qpair failed and we were unable to recover it. 
00:40:11.326 [2024-11-28 13:10:41.266780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.326 [2024-11-28 13:10:41.266875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.326 [2024-11-28 13:10:41.266888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.326 [2024-11-28 13:10:41.266895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.326 [2024-11-28 13:10:41.266901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.326 [2024-11-28 13:10:41.266915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.326 qpair failed and we were unable to recover it. 
00:40:11.326 [2024-11-28 13:10:41.276741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.326 [2024-11-28 13:10:41.276784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.326 [2024-11-28 13:10:41.276797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.326 [2024-11-28 13:10:41.276803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.326 [2024-11-28 13:10:41.276809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.326 [2024-11-28 13:10:41.276823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.326 qpair failed and we were unable to recover it. 
00:40:11.326 [2024-11-28 13:10:41.286718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.326 [2024-11-28 13:10:41.286764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.326 [2024-11-28 13:10:41.286776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.326 [2024-11-28 13:10:41.286783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.326 [2024-11-28 13:10:41.286789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.326 [2024-11-28 13:10:41.286803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.326 qpair failed and we were unable to recover it. 
00:40:11.326 [2024-11-28 13:10:41.296757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.326 [2024-11-28 13:10:41.296809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.326 [2024-11-28 13:10:41.296821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.326 [2024-11-28 13:10:41.296828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.326 [2024-11-28 13:10:41.296833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.326 [2024-11-28 13:10:41.296847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.326 qpair failed and we were unable to recover it. 
00:40:11.326 [2024-11-28 13:10:41.306763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.326 [2024-11-28 13:10:41.306809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.326 [2024-11-28 13:10:41.306825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.326 [2024-11-28 13:10:41.306832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.326 [2024-11-28 13:10:41.306838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.326 [2024-11-28 13:10:41.306851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.326 qpair failed and we were unable to recover it. 
00:40:11.326 [2024-11-28 13:10:41.316762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.326 [2024-11-28 13:10:41.316807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.326 [2024-11-28 13:10:41.316820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.326 [2024-11-28 13:10:41.316826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.326 [2024-11-28 13:10:41.316832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.326 [2024-11-28 13:10:41.316846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.326 qpair failed and we were unable to recover it. 
00:40:11.326 [2024-11-28 13:10:41.326761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.326 [2024-11-28 13:10:41.326809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.326 [2024-11-28 13:10:41.326821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.326 [2024-11-28 13:10:41.326828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.326 [2024-11-28 13:10:41.326834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.326 [2024-11-28 13:10:41.326848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.326 qpair failed and we were unable to recover it. 
00:40:11.326 [2024-11-28 13:10:41.336778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.326 [2024-11-28 13:10:41.336829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.326 [2024-11-28 13:10:41.336842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.326 [2024-11-28 13:10:41.336848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.326 [2024-11-28 13:10:41.336854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.326 [2024-11-28 13:10:41.336868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.326 qpair failed and we were unable to recover it. 
00:40:11.326 [2024-11-28 13:10:41.346695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.326 [2024-11-28 13:10:41.346745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.326 [2024-11-28 13:10:41.346759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.326 [2024-11-28 13:10:41.346766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.326 [2024-11-28 13:10:41.346775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.326 [2024-11-28 13:10:41.346790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.326 qpair failed and we were unable to recover it. 
00:40:11.326 [2024-11-28 13:10:41.356765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.326 [2024-11-28 13:10:41.356810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.326 [2024-11-28 13:10:41.356824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.326 [2024-11-28 13:10:41.356830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.326 [2024-11-28 13:10:41.356836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.326 [2024-11-28 13:10:41.356850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.326 qpair failed and we were unable to recover it. 
00:40:11.326 [2024-11-28 13:10:41.366783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.326 [2024-11-28 13:10:41.366831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.326 [2024-11-28 13:10:41.366844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.326 [2024-11-28 13:10:41.366850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.326 [2024-11-28 13:10:41.366856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.326 [2024-11-28 13:10:41.366870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.326 qpair failed and we were unable to recover it. 
00:40:11.326 [2024-11-28 13:10:41.376715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.326 [2024-11-28 13:10:41.376783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.326 [2024-11-28 13:10:41.376796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.326 [2024-11-28 13:10:41.376802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.326 [2024-11-28 13:10:41.376808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.326 [2024-11-28 13:10:41.376822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.326 qpair failed and we were unable to recover it. 
00:40:11.326 [2024-11-28 13:10:41.386793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.326 [2024-11-28 13:10:41.386841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.326 [2024-11-28 13:10:41.386853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.327 [2024-11-28 13:10:41.386860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.327 [2024-11-28 13:10:41.386866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.327 [2024-11-28 13:10:41.386879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.327 qpair failed and we were unable to recover it. 
00:40:11.327 [2024-11-28 13:10:41.396749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.327 [2024-11-28 13:10:41.396801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.327 [2024-11-28 13:10:41.396826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.327 [2024-11-28 13:10:41.396834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.327 [2024-11-28 13:10:41.396840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.327 [2024-11-28 13:10:41.396860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.327 qpair failed and we were unable to recover it. 
00:40:11.327 [2024-11-28 13:10:41.406790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.327 [2024-11-28 13:10:41.406843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.327 [2024-11-28 13:10:41.406868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.327 [2024-11-28 13:10:41.406876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.327 [2024-11-28 13:10:41.406883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.327 [2024-11-28 13:10:41.406902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.327 qpair failed and we were unable to recover it. 
00:40:11.327 [2024-11-28 13:10:41.416801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.327 [2024-11-28 13:10:41.416860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.327 [2024-11-28 13:10:41.416885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.327 [2024-11-28 13:10:41.416893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.327 [2024-11-28 13:10:41.416899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.327 [2024-11-28 13:10:41.416919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.327 qpair failed and we were unable to recover it. 
00:40:11.327 [2024-11-28 13:10:41.426837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.327 [2024-11-28 13:10:41.426884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.327 [2024-11-28 13:10:41.426898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.327 [2024-11-28 13:10:41.426905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.327 [2024-11-28 13:10:41.426911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.327 [2024-11-28 13:10:41.426926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.327 qpair failed and we were unable to recover it. 
00:40:11.327 [2024-11-28 13:10:41.436794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.327 [2024-11-28 13:10:41.436847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.327 [2024-11-28 13:10:41.436876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.327 [2024-11-28 13:10:41.436884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.327 [2024-11-28 13:10:41.436891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.327 [2024-11-28 13:10:41.436911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.327 qpair failed and we were unable to recover it. 
00:40:11.327 [2024-11-28 13:10:41.446796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.327 [2024-11-28 13:10:41.446845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.327 [2024-11-28 13:10:41.446869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.327 [2024-11-28 13:10:41.446878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.327 [2024-11-28 13:10:41.446884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.327 [2024-11-28 13:10:41.446903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.327 qpair failed and we were unable to recover it. 
00:40:11.590 [2024-11-28 13:10:41.456821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.590 [2024-11-28 13:10:41.456878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.590 [2024-11-28 13:10:41.456903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.590 [2024-11-28 13:10:41.456911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.590 [2024-11-28 13:10:41.456918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.590 [2024-11-28 13:10:41.456937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.590 qpair failed and we were unable to recover it. 
00:40:11.590 [2024-11-28 13:10:41.466827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.590 [2024-11-28 13:10:41.466897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.590 [2024-11-28 13:10:41.466921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.590 [2024-11-28 13:10:41.466930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.590 [2024-11-28 13:10:41.466937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.590 [2024-11-28 13:10:41.466956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.590 qpair failed and we were unable to recover it. 
00:40:11.590 [2024-11-28 13:10:41.476814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.590 [2024-11-28 13:10:41.476865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.590 [2024-11-28 13:10:41.476880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.590 [2024-11-28 13:10:41.476891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.590 [2024-11-28 13:10:41.476898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.590 [2024-11-28 13:10:41.476913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.590 qpair failed and we were unable to recover it. 
00:40:11.590 [2024-11-28 13:10:41.486825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.590 [2024-11-28 13:10:41.486871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.590 [2024-11-28 13:10:41.486884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.590 [2024-11-28 13:10:41.486891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.590 [2024-11-28 13:10:41.486897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.590 [2024-11-28 13:10:41.486911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.590 qpair failed and we were unable to recover it. 
00:40:11.590 [2024-11-28 13:10:41.496820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.590 [2024-11-28 13:10:41.496868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.590 [2024-11-28 13:10:41.496881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.590 [2024-11-28 13:10:41.496888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.590 [2024-11-28 13:10:41.496894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.590 [2024-11-28 13:10:41.496908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.590 qpair failed and we were unable to recover it. 
00:40:11.590 [2024-11-28 13:10:41.506858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.590 [2024-11-28 13:10:41.506903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.590 [2024-11-28 13:10:41.506916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.590 [2024-11-28 13:10:41.506922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.590 [2024-11-28 13:10:41.506928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.590 [2024-11-28 13:10:41.506942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.590 qpair failed and we were unable to recover it. 
00:40:11.590 [2024-11-28 13:10:41.516703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.590 [2024-11-28 13:10:41.516751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.590 [2024-11-28 13:10:41.516763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.590 [2024-11-28 13:10:41.516770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.590 [2024-11-28 13:10:41.516776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.590 [2024-11-28 13:10:41.516790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.590 qpair failed and we were unable to recover it. 
00:40:11.590 [2024-11-28 13:10:41.526838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.590 [2024-11-28 13:10:41.526886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.590 [2024-11-28 13:10:41.526899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.590 [2024-11-28 13:10:41.526906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.590 [2024-11-28 13:10:41.526912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.590 [2024-11-28 13:10:41.526925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.590 qpair failed and we were unable to recover it. 
00:40:11.590 [2024-11-28 13:10:41.536742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.590 [2024-11-28 13:10:41.536796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.590 [2024-11-28 13:10:41.536809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.590 [2024-11-28 13:10:41.536816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.590 [2024-11-28 13:10:41.536822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.591 [2024-11-28 13:10:41.536841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.591 qpair failed and we were unable to recover it. 
00:40:11.591 [2024-11-28 13:10:41.546853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.591 [2024-11-28 13:10:41.546899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.591 [2024-11-28 13:10:41.546911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.591 [2024-11-28 13:10:41.546918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.591 [2024-11-28 13:10:41.546924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.591 [2024-11-28 13:10:41.546938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.591 qpair failed and we were unable to recover it.
00:40:11.591 [2024-11-28 13:10:41.556816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.591 [2024-11-28 13:10:41.556857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.591 [2024-11-28 13:10:41.556870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.591 [2024-11-28 13:10:41.556876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.591 [2024-11-28 13:10:41.556882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.591 [2024-11-28 13:10:41.556896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.591 qpair failed and we were unable to recover it.
00:40:11.591 [2024-11-28 13:10:41.566852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.591 [2024-11-28 13:10:41.566927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.591 [2024-11-28 13:10:41.566940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.591 [2024-11-28 13:10:41.566947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.591 [2024-11-28 13:10:41.566953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.591 [2024-11-28 13:10:41.566967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.591 qpair failed and we were unable to recover it.
00:40:11.591 [2024-11-28 13:10:41.576864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.591 [2024-11-28 13:10:41.576914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.591 [2024-11-28 13:10:41.576927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.591 [2024-11-28 13:10:41.576934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.591 [2024-11-28 13:10:41.576940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.591 [2024-11-28 13:10:41.576954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.591 qpair failed and we were unable to recover it.
00:40:11.591 [2024-11-28 13:10:41.586842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.591 [2024-11-28 13:10:41.586894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.591 [2024-11-28 13:10:41.586907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.591 [2024-11-28 13:10:41.586913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.591 [2024-11-28 13:10:41.586920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.591 [2024-11-28 13:10:41.586933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.591 qpair failed and we were unable to recover it.
00:40:11.591 [2024-11-28 13:10:41.596856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.591 [2024-11-28 13:10:41.596901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.591 [2024-11-28 13:10:41.596914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.591 [2024-11-28 13:10:41.596920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.591 [2024-11-28 13:10:41.596926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.591 [2024-11-28 13:10:41.596940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.591 qpair failed and we were unable to recover it.
00:40:11.591 [2024-11-28 13:10:41.606871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.591 [2024-11-28 13:10:41.606919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.591 [2024-11-28 13:10:41.606932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.591 [2024-11-28 13:10:41.606946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.591 [2024-11-28 13:10:41.606952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.591 [2024-11-28 13:10:41.606966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.591 qpair failed and we were unable to recover it.
00:40:11.591 [2024-11-28 13:10:41.616910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.591 [2024-11-28 13:10:41.616969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.591 [2024-11-28 13:10:41.616981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.591 [2024-11-28 13:10:41.616988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.591 [2024-11-28 13:10:41.616994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.591 [2024-11-28 13:10:41.617008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.591 qpair failed and we were unable to recover it.
00:40:11.591 [2024-11-28 13:10:41.626877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.591 [2024-11-28 13:10:41.626927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.591 [2024-11-28 13:10:41.626939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.591 [2024-11-28 13:10:41.626946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.591 [2024-11-28 13:10:41.626952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.591 [2024-11-28 13:10:41.626966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.591 qpair failed and we were unable to recover it.
00:40:11.591 [2024-11-28 13:10:41.636873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.591 [2024-11-28 13:10:41.636920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.591 [2024-11-28 13:10:41.636932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.591 [2024-11-28 13:10:41.636939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.591 [2024-11-28 13:10:41.636945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.591 [2024-11-28 13:10:41.636959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.591 qpair failed and we were unable to recover it.
00:40:11.591 [2024-11-28 13:10:41.646855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.591 [2024-11-28 13:10:41.646902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.591 [2024-11-28 13:10:41.646915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.591 [2024-11-28 13:10:41.646921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.591 [2024-11-28 13:10:41.646928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.591 [2024-11-28 13:10:41.646944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.591 qpair failed and we were unable to recover it.
00:40:11.591 [2024-11-28 13:10:41.656867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.591 [2024-11-28 13:10:41.656925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.591 [2024-11-28 13:10:41.656937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.591 [2024-11-28 13:10:41.656943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.591 [2024-11-28 13:10:41.656949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.591 [2024-11-28 13:10:41.656963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.591 qpair failed and we were unable to recover it.
00:40:11.591 [2024-11-28 13:10:41.666883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.591 [2024-11-28 13:10:41.666928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.591 [2024-11-28 13:10:41.666942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.591 [2024-11-28 13:10:41.666949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.591 [2024-11-28 13:10:41.666955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.591 [2024-11-28 13:10:41.666969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.591 qpair failed and we were unable to recover it.
00:40:11.591 [2024-11-28 13:10:41.676944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.592 [2024-11-28 13:10:41.677005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.592 [2024-11-28 13:10:41.677018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.592 [2024-11-28 13:10:41.677025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.592 [2024-11-28 13:10:41.677031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.592 [2024-11-28 13:10:41.677045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.592 qpair failed and we were unable to recover it.
00:40:11.592 [2024-11-28 13:10:41.686884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.592 [2024-11-28 13:10:41.686936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.592 [2024-11-28 13:10:41.686949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.592 [2024-11-28 13:10:41.686956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.592 [2024-11-28 13:10:41.686962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.592 [2024-11-28 13:10:41.686976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.592 qpair failed and we were unable to recover it.
00:40:11.592 [2024-11-28 13:10:41.696874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.592 [2024-11-28 13:10:41.696920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.592 [2024-11-28 13:10:41.696933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.592 [2024-11-28 13:10:41.696940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.592 [2024-11-28 13:10:41.696946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.592 [2024-11-28 13:10:41.696960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.592 qpair failed and we were unable to recover it.
00:40:11.592 [2024-11-28 13:10:41.706916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.592 [2024-11-28 13:10:41.706955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.592 [2024-11-28 13:10:41.706968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.592 [2024-11-28 13:10:41.706975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.592 [2024-11-28 13:10:41.706981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.592 [2024-11-28 13:10:41.706995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.592 qpair failed and we were unable to recover it.
00:40:11.855 [2024-11-28 13:10:41.716893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.855 [2024-11-28 13:10:41.716936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.855 [2024-11-28 13:10:41.716948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.855 [2024-11-28 13:10:41.716955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.855 [2024-11-28 13:10:41.716961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.855 [2024-11-28 13:10:41.716975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.855 qpair failed and we were unable to recover it.
00:40:11.855 [2024-11-28 13:10:41.726876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.855 [2024-11-28 13:10:41.726919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.855 [2024-11-28 13:10:41.726932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.855 [2024-11-28 13:10:41.726939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.855 [2024-11-28 13:10:41.726945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.855 [2024-11-28 13:10:41.726958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.855 qpair failed and we were unable to recover it.
00:40:11.855 [2024-11-28 13:10:41.736942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.855 [2024-11-28 13:10:41.737020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.855 [2024-11-28 13:10:41.737036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.855 [2024-11-28 13:10:41.737042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.855 [2024-11-28 13:10:41.737048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.855 [2024-11-28 13:10:41.737062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.855 qpair failed and we were unable to recover it.
00:40:11.855 [2024-11-28 13:10:41.746939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.855 [2024-11-28 13:10:41.746980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.855 [2024-11-28 13:10:41.746993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.855 [2024-11-28 13:10:41.746999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.855 [2024-11-28 13:10:41.747005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.855 [2024-11-28 13:10:41.747019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.855 qpair failed and we were unable to recover it.
00:40:11.855 [2024-11-28 13:10:41.756923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.855 [2024-11-28 13:10:41.756966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.855 [2024-11-28 13:10:41.756979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.855 [2024-11-28 13:10:41.756986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.855 [2024-11-28 13:10:41.756992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.855 [2024-11-28 13:10:41.757006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.855 qpair failed and we were unable to recover it.
00:40:11.855 [2024-11-28 13:10:41.766940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.855 [2024-11-28 13:10:41.767011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.855 [2024-11-28 13:10:41.767024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.855 [2024-11-28 13:10:41.767031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.855 [2024-11-28 13:10:41.767037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.855 [2024-11-28 13:10:41.767050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.855 qpair failed and we were unable to recover it.
00:40:11.855 [2024-11-28 13:10:41.776929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.855 [2024-11-28 13:10:41.776979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.855 [2024-11-28 13:10:41.776991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.855 [2024-11-28 13:10:41.776998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.855 [2024-11-28 13:10:41.777007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.855 [2024-11-28 13:10:41.777021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.855 qpair failed and we were unable to recover it.
00:40:11.855 [2024-11-28 13:10:41.786900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.855 [2024-11-28 13:10:41.786945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.855 [2024-11-28 13:10:41.786958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.855 [2024-11-28 13:10:41.786965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.855 [2024-11-28 13:10:41.786971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.855 [2024-11-28 13:10:41.786985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.855 qpair failed and we were unable to recover it.
00:40:11.855 [2024-11-28 13:10:41.796948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.855 [2024-11-28 13:10:41.796999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.855 [2024-11-28 13:10:41.797011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.855 [2024-11-28 13:10:41.797019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.855 [2024-11-28 13:10:41.797025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.855 [2024-11-28 13:10:41.797039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.855 qpair failed and we were unable to recover it.
00:40:11.855 [2024-11-28 13:10:41.806953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.855 [2024-11-28 13:10:41.806997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.855 [2024-11-28 13:10:41.807009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.855 [2024-11-28 13:10:41.807016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.855 [2024-11-28 13:10:41.807022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.855 [2024-11-28 13:10:41.807036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.855 qpair failed and we were unable to recover it.
00:40:11.855 [2024-11-28 13:10:41.816961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.855 [2024-11-28 13:10:41.817006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.855 [2024-11-28 13:10:41.817018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.855 [2024-11-28 13:10:41.817025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.855 [2024-11-28 13:10:41.817031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.855 [2024-11-28 13:10:41.817045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.855 qpair failed and we were unable to recover it.
00:40:11.855 [2024-11-28 13:10:41.826920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.855 [2024-11-28 13:10:41.826965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.855 [2024-11-28 13:10:41.826978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.855 [2024-11-28 13:10:41.826984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.855 [2024-11-28 13:10:41.826991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.855 [2024-11-28 13:10:41.827004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.855 qpair failed and we were unable to recover it.
00:40:11.855 [2024-11-28 13:10:41.836957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.855 [2024-11-28 13:10:41.837007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.855 [2024-11-28 13:10:41.837019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.855 [2024-11-28 13:10:41.837026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.856 [2024-11-28 13:10:41.837032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.856 [2024-11-28 13:10:41.837046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.856 qpair failed and we were unable to recover it.
00:40:11.856 [2024-11-28 13:10:41.846965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.856 [2024-11-28 13:10:41.847014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.856 [2024-11-28 13:10:41.847027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.856 [2024-11-28 13:10:41.847033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.856 [2024-11-28 13:10:41.847039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.856 [2024-11-28 13:10:41.847053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.856 qpair failed and we were unable to recover it.
00:40:11.856 [2024-11-28 13:10:41.856979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.856 [2024-11-28 13:10:41.857032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.856 [2024-11-28 13:10:41.857044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.856 [2024-11-28 13:10:41.857051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.856 [2024-11-28 13:10:41.857057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.856 [2024-11-28 13:10:41.857071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.856 qpair failed and we were unable to recover it.
00:40:11.856 [2024-11-28 13:10:41.866973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.856 [2024-11-28 13:10:41.867019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.856 [2024-11-28 13:10:41.867035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.856 [2024-11-28 13:10:41.867041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.856 [2024-11-28 13:10:41.867048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.856 [2024-11-28 13:10:41.867062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.856 qpair failed and we were unable to recover it.
00:40:11.856 [2024-11-28 13:10:41.876975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.856 [2024-11-28 13:10:41.877020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.856 [2024-11-28 13:10:41.877033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.856 [2024-11-28 13:10:41.877039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.856 [2024-11-28 13:10:41.877045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.856 [2024-11-28 13:10:41.877059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.856 qpair failed and we were unable to recover it.
00:40:11.856 [2024-11-28 13:10:41.886999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.856 [2024-11-28 13:10:41.887044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.856 [2024-11-28 13:10:41.887058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.856 [2024-11-28 13:10:41.887065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.856 [2024-11-28 13:10:41.887071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:11.856 [2024-11-28 13:10:41.887085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:11.856 qpair failed and we were unable to recover it.
00:40:11.856 [2024-11-28 13:10:41.896994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.856 [2024-11-28 13:10:41.897041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.856 [2024-11-28 13:10:41.897054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.856 [2024-11-28 13:10:41.897061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.856 [2024-11-28 13:10:41.897067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.856 [2024-11-28 13:10:41.897081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.856 qpair failed and we were unable to recover it. 
00:40:11.856 [2024-11-28 13:10:41.906859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.856 [2024-11-28 13:10:41.906907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.856 [2024-11-28 13:10:41.906920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.856 [2024-11-28 13:10:41.906927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.856 [2024-11-28 13:10:41.906936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.856 [2024-11-28 13:10:41.906950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.856 qpair failed and we were unable to recover it. 
00:40:11.856 [2024-11-28 13:10:41.917016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.856 [2024-11-28 13:10:41.917064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.856 [2024-11-28 13:10:41.917077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.856 [2024-11-28 13:10:41.917084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.856 [2024-11-28 13:10:41.917090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.856 [2024-11-28 13:10:41.917104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.856 qpair failed and we were unable to recover it. 
00:40:11.856 [2024-11-28 13:10:41.926991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.856 [2024-11-28 13:10:41.927043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.856 [2024-11-28 13:10:41.927055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.856 [2024-11-28 13:10:41.927062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.856 [2024-11-28 13:10:41.927068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.856 [2024-11-28 13:10:41.927082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.856 qpair failed and we were unable to recover it. 
00:40:11.856 [2024-11-28 13:10:41.937022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.856 [2024-11-28 13:10:41.937070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.856 [2024-11-28 13:10:41.937082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.856 [2024-11-28 13:10:41.937089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.856 [2024-11-28 13:10:41.937095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.856 [2024-11-28 13:10:41.937109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.856 qpair failed and we were unable to recover it. 
00:40:11.856 [2024-11-28 13:10:41.946996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.856 [2024-11-28 13:10:41.947041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.856 [2024-11-28 13:10:41.947053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.856 [2024-11-28 13:10:41.947060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.856 [2024-11-28 13:10:41.947066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.856 [2024-11-28 13:10:41.947080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.856 qpair failed and we were unable to recover it. 
00:40:11.856 [2024-11-28 13:10:41.956996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.856 [2024-11-28 13:10:41.957039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.856 [2024-11-28 13:10:41.957051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.856 [2024-11-28 13:10:41.957058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.856 [2024-11-28 13:10:41.957064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.856 [2024-11-28 13:10:41.957077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.856 qpair failed and we were unable to recover it. 
00:40:11.856 [2024-11-28 13:10:41.967015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.856 [2024-11-28 13:10:41.967069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.856 [2024-11-28 13:10:41.967082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.856 [2024-11-28 13:10:41.967088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.856 [2024-11-28 13:10:41.967094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.856 [2024-11-28 13:10:41.967108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.857 qpair failed and we were unable to recover it. 
00:40:11.857 [2024-11-28 13:10:41.977001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.857 [2024-11-28 13:10:41.977047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.857 [2024-11-28 13:10:41.977060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.857 [2024-11-28 13:10:41.977067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.857 [2024-11-28 13:10:41.977073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:11.857 [2024-11-28 13:10:41.977087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:11.857 qpair failed and we were unable to recover it. 
00:40:12.118 [2024-11-28 13:10:41.987017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.118 [2024-11-28 13:10:41.987061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.118 [2024-11-28 13:10:41.987073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.118 [2024-11-28 13:10:41.987080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.118 [2024-11-28 13:10:41.987086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.118 [2024-11-28 13:10:41.987100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.118 qpair failed and we were unable to recover it. 
00:40:12.118 [2024-11-28 13:10:41.997032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.118 [2024-11-28 13:10:41.997079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.118 [2024-11-28 13:10:41.997095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.118 [2024-11-28 13:10:41.997101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.118 [2024-11-28 13:10:41.997107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.118 [2024-11-28 13:10:41.997121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.118 qpair failed and we were unable to recover it. 
00:40:12.118 [2024-11-28 13:10:42.006999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.118 [2024-11-28 13:10:42.007049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.118 [2024-11-28 13:10:42.007061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.118 [2024-11-28 13:10:42.007068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.118 [2024-11-28 13:10:42.007075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.118 [2024-11-28 13:10:42.007089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.118 qpair failed and we were unable to recover it. 
00:40:12.118 [2024-11-28 13:10:42.017030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.118 [2024-11-28 13:10:42.017075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.118 [2024-11-28 13:10:42.017088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.118 [2024-11-28 13:10:42.017094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.118 [2024-11-28 13:10:42.017101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.118 [2024-11-28 13:10:42.017115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.118 qpair failed and we were unable to recover it. 
00:40:12.118 [2024-11-28 13:10:42.027057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.118 [2024-11-28 13:10:42.027101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.118 [2024-11-28 13:10:42.027114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.118 [2024-11-28 13:10:42.027121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.118 [2024-11-28 13:10:42.027127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.118 [2024-11-28 13:10:42.027140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.118 qpair failed and we were unable to recover it. 
00:40:12.118 [2024-11-28 13:10:42.037029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.118 [2024-11-28 13:10:42.037073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.118 [2024-11-28 13:10:42.037085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.118 [2024-11-28 13:10:42.037096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.118 [2024-11-28 13:10:42.037102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.118 [2024-11-28 13:10:42.037116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.118 qpair failed and we were unable to recover it. 
00:40:12.118 [2024-11-28 13:10:42.047027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.118 [2024-11-28 13:10:42.047085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.118 [2024-11-28 13:10:42.047097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.118 [2024-11-28 13:10:42.047104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.118 [2024-11-28 13:10:42.047110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.118 [2024-11-28 13:10:42.047123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.118 qpair failed and we were unable to recover it. 
00:40:12.118 [2024-11-28 13:10:42.057049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.118 [2024-11-28 13:10:42.057098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.118 [2024-11-28 13:10:42.057110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.118 [2024-11-28 13:10:42.057117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.118 [2024-11-28 13:10:42.057123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.118 [2024-11-28 13:10:42.057137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.119 qpair failed and we were unable to recover it. 
00:40:12.119 [2024-11-28 13:10:42.066930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.119 [2024-11-28 13:10:42.066978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.119 [2024-11-28 13:10:42.066991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.119 [2024-11-28 13:10:42.066998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.119 [2024-11-28 13:10:42.067004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.119 [2024-11-28 13:10:42.067018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.119 qpair failed and we were unable to recover it. 
00:40:12.119 [2024-11-28 13:10:42.077047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.119 [2024-11-28 13:10:42.077088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.119 [2024-11-28 13:10:42.077101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.119 [2024-11-28 13:10:42.077108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.119 [2024-11-28 13:10:42.077114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.119 [2024-11-28 13:10:42.077131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.119 qpair failed and we were unable to recover it. 
00:40:12.119 [2024-11-28 13:10:42.087009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.119 [2024-11-28 13:10:42.087056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.119 [2024-11-28 13:10:42.087068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.119 [2024-11-28 13:10:42.087075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.119 [2024-11-28 13:10:42.087081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.119 [2024-11-28 13:10:42.087095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.119 qpair failed and we were unable to recover it. 
00:40:12.119 [2024-11-28 13:10:42.097073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.119 [2024-11-28 13:10:42.097123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.119 [2024-11-28 13:10:42.097135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.119 [2024-11-28 13:10:42.097141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.119 [2024-11-28 13:10:42.097147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.119 [2024-11-28 13:10:42.097165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.119 qpair failed and we were unable to recover it. 
00:40:12.119 [2024-11-28 13:10:42.107062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.119 [2024-11-28 13:10:42.107106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.119 [2024-11-28 13:10:42.107119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.119 [2024-11-28 13:10:42.107125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.119 [2024-11-28 13:10:42.107131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.119 [2024-11-28 13:10:42.107145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.119 qpair failed and we were unable to recover it. 
00:40:12.119 [2024-11-28 13:10:42.117058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.119 [2024-11-28 13:10:42.117126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.119 [2024-11-28 13:10:42.117138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.119 [2024-11-28 13:10:42.117145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.119 [2024-11-28 13:10:42.117151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.119 [2024-11-28 13:10:42.117168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.119 qpair failed and we were unable to recover it. 
00:40:12.119 [2024-11-28 13:10:42.127073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.119 [2024-11-28 13:10:42.127125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.119 [2024-11-28 13:10:42.127137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.119 [2024-11-28 13:10:42.127144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.119 [2024-11-28 13:10:42.127150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.119 [2024-11-28 13:10:42.127167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.119 qpair failed and we were unable to recover it. 
00:40:12.119 [2024-11-28 13:10:42.137070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.119 [2024-11-28 13:10:42.137117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.119 [2024-11-28 13:10:42.137130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.119 [2024-11-28 13:10:42.137136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.119 [2024-11-28 13:10:42.137142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.119 [2024-11-28 13:10:42.137156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.119 qpair failed and we were unable to recover it. 
00:40:12.119 [2024-11-28 13:10:42.147074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.119 [2024-11-28 13:10:42.147135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.119 [2024-11-28 13:10:42.147148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.119 [2024-11-28 13:10:42.147154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.119 [2024-11-28 13:10:42.147164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.119 [2024-11-28 13:10:42.147178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.119 qpair failed and we were unable to recover it. 
00:40:12.119 [2024-11-28 13:10:42.157073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.119 [2024-11-28 13:10:42.157117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.119 [2024-11-28 13:10:42.157129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.119 [2024-11-28 13:10:42.157136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.119 [2024-11-28 13:10:42.157142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.119 [2024-11-28 13:10:42.157156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.119 qpair failed and we were unable to recover it. 
00:40:12.119 [2024-11-28 13:10:42.167086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.119 [2024-11-28 13:10:42.167140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.119 [2024-11-28 13:10:42.167153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.119 [2024-11-28 13:10:42.167166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.119 [2024-11-28 13:10:42.167173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.119 [2024-11-28 13:10:42.167187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.119 qpair failed and we were unable to recover it. 
00:40:12.119 [2024-11-28 13:10:42.177071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.119 [2024-11-28 13:10:42.177123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.119 [2024-11-28 13:10:42.177135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.119 [2024-11-28 13:10:42.177142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.119 [2024-11-28 13:10:42.177148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.119 [2024-11-28 13:10:42.177164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.119 qpair failed and we were unable to recover it. 
00:40:12.119 [2024-11-28 13:10:42.187059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.119 [2024-11-28 13:10:42.187101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.120 [2024-11-28 13:10:42.187114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.120 [2024-11-28 13:10:42.187120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.120 [2024-11-28 13:10:42.187126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.120 [2024-11-28 13:10:42.187139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.120 qpair failed and we were unable to recover it. 
00:40:12.120 [2024-11-28 13:10:42.197091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.120 [2024-11-28 13:10:42.197136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.120 [2024-11-28 13:10:42.197149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.120 [2024-11-28 13:10:42.197155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.120 [2024-11-28 13:10:42.197165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.120 [2024-11-28 13:10:42.197179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.120 qpair failed and we were unable to recover it. 
00:40:12.120 [2024-11-28 13:10:42.207093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.120 [2024-11-28 13:10:42.207137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.120 [2024-11-28 13:10:42.207149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.120 [2024-11-28 13:10:42.207155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.120 [2024-11-28 13:10:42.207165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.120 [2024-11-28 13:10:42.207182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.120 qpair failed and we were unable to recover it. 
00:40:12.120 [2024-11-28 13:10:42.217105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.120 [2024-11-28 13:10:42.217161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.120 [2024-11-28 13:10:42.217174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.120 [2024-11-28 13:10:42.217180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.120 [2024-11-28 13:10:42.217186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.120 [2024-11-28 13:10:42.217200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.120 qpair failed and we were unable to recover it. 
00:40:12.120 [2024-11-28 13:10:42.227095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.120 [2024-11-28 13:10:42.227145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.120 [2024-11-28 13:10:42.227161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.120 [2024-11-28 13:10:42.227168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.120 [2024-11-28 13:10:42.227174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.120 [2024-11-28 13:10:42.227188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.120 qpair failed and we were unable to recover it. 
00:40:12.120 [2024-11-28 13:10:42.237074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.120 [2024-11-28 13:10:42.237114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.120 [2024-11-28 13:10:42.237127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.120 [2024-11-28 13:10:42.237134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.120 [2024-11-28 13:10:42.237140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.120 [2024-11-28 13:10:42.237154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.120 qpair failed and we were unable to recover it. 
00:40:12.382 [2024-11-28 13:10:42.246988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.382 [2024-11-28 13:10:42.247034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.382 [2024-11-28 13:10:42.247046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.382 [2024-11-28 13:10:42.247053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.382 [2024-11-28 13:10:42.247059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.382 [2024-11-28 13:10:42.247073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.382 qpair failed and we were unable to recover it. 
00:40:12.382 [2024-11-28 13:10:42.257106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.382 [2024-11-28 13:10:42.257155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.382 [2024-11-28 13:10:42.257170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.382 [2024-11-28 13:10:42.257177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.382 [2024-11-28 13:10:42.257183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.382 [2024-11-28 13:10:42.257197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.382 qpair failed and we were unable to recover it. 
00:40:12.382 [2024-11-28 13:10:42.267077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.382 [2024-11-28 13:10:42.267115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.382 [2024-11-28 13:10:42.267127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.382 [2024-11-28 13:10:42.267134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.382 [2024-11-28 13:10:42.267140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.382 [2024-11-28 13:10:42.267153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.382 qpair failed and we were unable to recover it. 
00:40:12.382 [2024-11-28 13:10:42.277109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.382 [2024-11-28 13:10:42.277155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.382 [2024-11-28 13:10:42.277171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.382 [2024-11-28 13:10:42.277177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.382 [2024-11-28 13:10:42.277183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.382 [2024-11-28 13:10:42.277198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.382 qpair failed and we were unable to recover it. 
00:40:12.382 [2024-11-28 13:10:42.287119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.382 [2024-11-28 13:10:42.287171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.382 [2024-11-28 13:10:42.287185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.382 [2024-11-28 13:10:42.287195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.382 [2024-11-28 13:10:42.287201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.382 [2024-11-28 13:10:42.287216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.382 qpair failed and we were unable to recover it. 
00:40:12.382 [2024-11-28 13:10:42.297120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.382 [2024-11-28 13:10:42.297190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.382 [2024-11-28 13:10:42.297209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.382 [2024-11-28 13:10:42.297216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.382 [2024-11-28 13:10:42.297222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.382 [2024-11-28 13:10:42.297236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.382 qpair failed and we were unable to recover it. 
00:40:12.382 [2024-11-28 13:10:42.307105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.382 [2024-11-28 13:10:42.307149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.382 [2024-11-28 13:10:42.307164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.382 [2024-11-28 13:10:42.307171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.382 [2024-11-28 13:10:42.307177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.382 [2024-11-28 13:10:42.307191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.382 qpair failed and we were unable to recover it. 
00:40:12.382 [2024-11-28 13:10:42.317136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.382 [2024-11-28 13:10:42.317184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.382 [2024-11-28 13:10:42.317197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.382 [2024-11-28 13:10:42.317203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.382 [2024-11-28 13:10:42.317209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.382 [2024-11-28 13:10:42.317223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.382 qpair failed and we were unable to recover it. 
00:40:12.382 [2024-11-28 13:10:42.327124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.382 [2024-11-28 13:10:42.327184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.382 [2024-11-28 13:10:42.327196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.382 [2024-11-28 13:10:42.327202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.382 [2024-11-28 13:10:42.327209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.382 [2024-11-28 13:10:42.327223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.382 qpair failed and we were unable to recover it. 
00:40:12.382 [2024-11-28 13:10:42.337144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.382 [2024-11-28 13:10:42.337200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.382 [2024-11-28 13:10:42.337213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.382 [2024-11-28 13:10:42.337219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.382 [2024-11-28 13:10:42.337228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.382 [2024-11-28 13:10:42.337243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.382 qpair failed and we were unable to recover it. 
00:40:12.382 [2024-11-28 13:10:42.347106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.382 [2024-11-28 13:10:42.347146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.382 [2024-11-28 13:10:42.347162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.382 [2024-11-28 13:10:42.347169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.382 [2024-11-28 13:10:42.347175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.383 [2024-11-28 13:10:42.347189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.383 qpair failed and we were unable to recover it. 
00:40:12.383 [2024-11-28 13:10:42.357140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.383 [2024-11-28 13:10:42.357184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.383 [2024-11-28 13:10:42.357197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.383 [2024-11-28 13:10:42.357203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.383 [2024-11-28 13:10:42.357209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.383 [2024-11-28 13:10:42.357223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.383 qpair failed and we were unable to recover it. 
00:40:12.383 [2024-11-28 13:10:42.367144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.383 [2024-11-28 13:10:42.367191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.383 [2024-11-28 13:10:42.367204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.383 [2024-11-28 13:10:42.367211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.383 [2024-11-28 13:10:42.367217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.383 [2024-11-28 13:10:42.367231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.383 qpair failed and we were unable to recover it. 
00:40:12.383 [2024-11-28 13:10:42.377175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.383 [2024-11-28 13:10:42.377221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.383 [2024-11-28 13:10:42.377234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.383 [2024-11-28 13:10:42.377240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.383 [2024-11-28 13:10:42.377246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.383 [2024-11-28 13:10:42.377260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.383 qpair failed and we were unable to recover it. 
00:40:12.383 [2024-11-28 13:10:42.387146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.383 [2024-11-28 13:10:42.387198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.383 [2024-11-28 13:10:42.387211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.383 [2024-11-28 13:10:42.387217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.383 [2024-11-28 13:10:42.387223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.383 [2024-11-28 13:10:42.387237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.383 qpair failed and we were unable to recover it. 
00:40:12.383 [2024-11-28 13:10:42.397050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.383 [2024-11-28 13:10:42.397094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.383 [2024-11-28 13:10:42.397108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.383 [2024-11-28 13:10:42.397115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.383 [2024-11-28 13:10:42.397121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.383 [2024-11-28 13:10:42.397136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.383 qpair failed and we were unable to recover it. 
00:40:12.383 [2024-11-28 13:10:42.407163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.383 [2024-11-28 13:10:42.407211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.383 [2024-11-28 13:10:42.407224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.383 [2024-11-28 13:10:42.407230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.383 [2024-11-28 13:10:42.407236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.383 [2024-11-28 13:10:42.407250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.383 qpair failed and we were unable to recover it. 
00:40:12.383 [2024-11-28 13:10:42.417162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.383 [2024-11-28 13:10:42.417215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.383 [2024-11-28 13:10:42.417227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.383 [2024-11-28 13:10:42.417234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.383 [2024-11-28 13:10:42.417240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.383 [2024-11-28 13:10:42.417254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.383 qpair failed and we were unable to recover it. 
00:40:12.383 [2024-11-28 13:10:42.427143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.383 [2024-11-28 13:10:42.427206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.383 [2024-11-28 13:10:42.427222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.383 [2024-11-28 13:10:42.427228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.383 [2024-11-28 13:10:42.427234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.383 [2024-11-28 13:10:42.427248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.383 qpair failed and we were unable to recover it. 
00:40:12.383 [2024-11-28 13:10:42.437034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.383 [2024-11-28 13:10:42.437075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.383 [2024-11-28 13:10:42.437088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.383 [2024-11-28 13:10:42.437095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.383 [2024-11-28 13:10:42.437100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.383 [2024-11-28 13:10:42.437114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.383 qpair failed and we were unable to recover it. 
00:40:12.383 [2024-11-28 13:10:42.447180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.383 [2024-11-28 13:10:42.447223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.383 [2024-11-28 13:10:42.447236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.383 [2024-11-28 13:10:42.447243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.383 [2024-11-28 13:10:42.447248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.383 [2024-11-28 13:10:42.447262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.383 qpair failed and we were unable to recover it. 
00:40:12.383 [2024-11-28 13:10:42.457190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.383 [2024-11-28 13:10:42.457237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.383 [2024-11-28 13:10:42.457250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.383 [2024-11-28 13:10:42.457256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.383 [2024-11-28 13:10:42.457263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.383 [2024-11-28 13:10:42.457277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.383 qpair failed and we were unable to recover it. 
00:40:12.383 [2024-11-28 13:10:42.467143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.383 [2024-11-28 13:10:42.467198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.383 [2024-11-28 13:10:42.467211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.383 [2024-11-28 13:10:42.467218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.383 [2024-11-28 13:10:42.467227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.383 [2024-11-28 13:10:42.467242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.383 qpair failed and we were unable to recover it. 
00:40:12.383 [2024-11-28 13:10:42.477157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.384 [2024-11-28 13:10:42.477202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.384 [2024-11-28 13:10:42.477214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.384 [2024-11-28 13:10:42.477221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.384 [2024-11-28 13:10:42.477227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.384 [2024-11-28 13:10:42.477241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.384 qpair failed and we were unable to recover it.
00:40:12.384 [2024-11-28 13:10:42.487193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.384 [2024-11-28 13:10:42.487250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.384 [2024-11-28 13:10:42.487262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.384 [2024-11-28 13:10:42.487269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.384 [2024-11-28 13:10:42.487275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.384 [2024-11-28 13:10:42.487289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.384 qpair failed and we were unable to recover it.
00:40:12.384 [2024-11-28 13:10:42.497171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.384 [2024-11-28 13:10:42.497219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.384 [2024-11-28 13:10:42.497231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.384 [2024-11-28 13:10:42.497238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.384 [2024-11-28 13:10:42.497244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.384 [2024-11-28 13:10:42.497258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.384 qpair failed and we were unable to recover it.
00:40:12.651 [2024-11-28 13:10:42.507197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.651 [2024-11-28 13:10:42.507241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.651 [2024-11-28 13:10:42.507254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.651 [2024-11-28 13:10:42.507260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.651 [2024-11-28 13:10:42.507266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.651 [2024-11-28 13:10:42.507280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.651 qpair failed and we were unable to recover it.
00:40:12.651 [2024-11-28 13:10:42.517191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.651 [2024-11-28 13:10:42.517238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.651 [2024-11-28 13:10:42.517251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.651 [2024-11-28 13:10:42.517257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.651 [2024-11-28 13:10:42.517263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.651 [2024-11-28 13:10:42.517277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.651 qpair failed and we were unable to recover it.
00:40:12.651 [2024-11-28 13:10:42.527166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.651 [2024-11-28 13:10:42.527216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.651 [2024-11-28 13:10:42.527229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.651 [2024-11-28 13:10:42.527236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.651 [2024-11-28 13:10:42.527242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.651 [2024-11-28 13:10:42.527256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.651 qpair failed and we were unable to recover it.
00:40:12.651 [2024-11-28 13:10:42.537213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.651 [2024-11-28 13:10:42.537266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.651 [2024-11-28 13:10:42.537279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.651 [2024-11-28 13:10:42.537285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.651 [2024-11-28 13:10:42.537291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.651 [2024-11-28 13:10:42.537305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.651 qpair failed and we were unable to recover it.
00:40:12.651 [2024-11-28 13:10:42.547212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.651 [2024-11-28 13:10:42.547284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.651 [2024-11-28 13:10:42.547297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.651 [2024-11-28 13:10:42.547303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.651 [2024-11-28 13:10:42.547309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.651 [2024-11-28 13:10:42.547323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.651 qpair failed and we were unable to recover it.
00:40:12.651 [2024-11-28 13:10:42.557081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.651 [2024-11-28 13:10:42.557124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.651 [2024-11-28 13:10:42.557140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.651 [2024-11-28 13:10:42.557146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.651 [2024-11-28 13:10:42.557153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.651 [2024-11-28 13:10:42.557176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.651 qpair failed and we were unable to recover it.
00:40:12.651 [2024-11-28 13:10:42.567218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.651 [2024-11-28 13:10:42.567306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.651 [2024-11-28 13:10:42.567319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.651 [2024-11-28 13:10:42.567326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.651 [2024-11-28 13:10:42.567332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.651 [2024-11-28 13:10:42.567346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.651 qpair failed and we were unable to recover it.
00:40:12.651 [2024-11-28 13:10:42.577188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.652 [2024-11-28 13:10:42.577233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.652 [2024-11-28 13:10:42.577245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.652 [2024-11-28 13:10:42.577252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.652 [2024-11-28 13:10:42.577258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.652 [2024-11-28 13:10:42.577272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.652 qpair failed and we were unable to recover it.
00:40:12.652 [2024-11-28 13:10:42.587089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.652 [2024-11-28 13:10:42.587133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.652 [2024-11-28 13:10:42.587147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.652 [2024-11-28 13:10:42.587153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.652 [2024-11-28 13:10:42.587163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.652 [2024-11-28 13:10:42.587178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.652 qpair failed and we were unable to recover it.
00:40:12.652 [2024-11-28 13:10:42.597224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.652 [2024-11-28 13:10:42.597288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.652 [2024-11-28 13:10:42.597298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.652 [2024-11-28 13:10:42.597305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.652 [2024-11-28 13:10:42.597310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.652 [2024-11-28 13:10:42.597320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.652 qpair failed and we were unable to recover it.
00:40:12.652 [2024-11-28 13:10:42.607200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.652 [2024-11-28 13:10:42.607242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.652 [2024-11-28 13:10:42.607251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.652 [2024-11-28 13:10:42.607255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.652 [2024-11-28 13:10:42.607260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.652 [2024-11-28 13:10:42.607270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.652 qpair failed and we were unable to recover it.
00:40:12.652 [2024-11-28 13:10:42.617235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.652 [2024-11-28 13:10:42.617277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.652 [2024-11-28 13:10:42.617287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.652 [2024-11-28 13:10:42.617291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.652 [2024-11-28 13:10:42.617296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.652 [2024-11-28 13:10:42.617305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.652 qpair failed and we were unable to recover it.
00:40:12.652 [2024-11-28 13:10:42.627249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.652 [2024-11-28 13:10:42.627284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.652 [2024-11-28 13:10:42.627294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.652 [2024-11-28 13:10:42.627298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.652 [2024-11-28 13:10:42.627302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.652 [2024-11-28 13:10:42.627312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.652 qpair failed and we were unable to recover it.
00:40:12.652 [2024-11-28 13:10:42.637232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.652 [2024-11-28 13:10:42.637269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.652 [2024-11-28 13:10:42.637279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.652 [2024-11-28 13:10:42.637283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.652 [2024-11-28 13:10:42.637287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.652 [2024-11-28 13:10:42.637300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.652 qpair failed and we were unable to recover it.
00:40:12.652 [2024-11-28 13:10:42.647241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.652 [2024-11-28 13:10:42.647285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.652 [2024-11-28 13:10:42.647294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.652 [2024-11-28 13:10:42.647299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.652 [2024-11-28 13:10:42.647303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.652 [2024-11-28 13:10:42.647313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.652 qpair failed and we were unable to recover it.
00:40:12.652 [2024-11-28 13:10:42.657266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.652 [2024-11-28 13:10:42.657307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.652 [2024-11-28 13:10:42.657316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.652 [2024-11-28 13:10:42.657321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.652 [2024-11-28 13:10:42.657325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.652 [2024-11-28 13:10:42.657335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.652 qpair failed and we were unable to recover it.
00:40:12.652 [2024-11-28 13:10:42.667240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.652 [2024-11-28 13:10:42.667318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.652 [2024-11-28 13:10:42.667327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.652 [2024-11-28 13:10:42.667332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.652 [2024-11-28 13:10:42.667336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.652 [2024-11-28 13:10:42.667346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.652 qpair failed and we were unable to recover it.
00:40:12.652 [2024-11-28 13:10:42.677230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.652 [2024-11-28 13:10:42.677270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.652 [2024-11-28 13:10:42.677279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.652 [2024-11-28 13:10:42.677284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.652 [2024-11-28 13:10:42.677288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.652 [2024-11-28 13:10:42.677298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.652 qpair failed and we were unable to recover it.
00:40:12.652 [2024-11-28 13:10:42.687247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.652 [2024-11-28 13:10:42.687314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.652 [2024-11-28 13:10:42.687324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.652 [2024-11-28 13:10:42.687328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.652 [2024-11-28 13:10:42.687332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.652 [2024-11-28 13:10:42.687342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.652 qpair failed and we were unable to recover it.
00:40:12.652 [2024-11-28 13:10:42.697264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.652 [2024-11-28 13:10:42.697309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.652 [2024-11-28 13:10:42.697319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.652 [2024-11-28 13:10:42.697323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.652 [2024-11-28 13:10:42.697327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.652 [2024-11-28 13:10:42.697337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.652 qpair failed and we were unable to recover it.
00:40:12.652 [2024-11-28 13:10:42.707264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.653 [2024-11-28 13:10:42.707303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.653 [2024-11-28 13:10:42.707312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.653 [2024-11-28 13:10:42.707317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.653 [2024-11-28 13:10:42.707321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.653 [2024-11-28 13:10:42.707331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.653 qpair failed and we were unable to recover it.
00:40:12.653 [2024-11-28 13:10:42.717255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.653 [2024-11-28 13:10:42.717307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.653 [2024-11-28 13:10:42.717317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.653 [2024-11-28 13:10:42.717321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.653 [2024-11-28 13:10:42.717325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.653 [2024-11-28 13:10:42.717335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.653 qpair failed and we were unable to recover it.
00:40:12.653 [2024-11-28 13:10:42.727179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.653 [2024-11-28 13:10:42.727219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.653 [2024-11-28 13:10:42.727228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.653 [2024-11-28 13:10:42.727235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.653 [2024-11-28 13:10:42.727239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.653 [2024-11-28 13:10:42.727249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.653 qpair failed and we were unable to recover it.
00:40:12.653 [2024-11-28 13:10:42.737288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.653 [2024-11-28 13:10:42.737331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.653 [2024-11-28 13:10:42.737341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.653 [2024-11-28 13:10:42.737345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.653 [2024-11-28 13:10:42.737349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.653 [2024-11-28 13:10:42.737359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.653 qpair failed and we were unable to recover it.
00:40:12.653 [2024-11-28 13:10:42.747254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.653 [2024-11-28 13:10:42.747291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.653 [2024-11-28 13:10:42.747300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.653 [2024-11-28 13:10:42.747304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.653 [2024-11-28 13:10:42.747308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.653 [2024-11-28 13:10:42.747318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.653 qpair failed and we were unable to recover it.
00:40:12.653 [2024-11-28 13:10:42.757153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.653 [2024-11-28 13:10:42.757195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.653 [2024-11-28 13:10:42.757204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.653 [2024-11-28 13:10:42.757209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.653 [2024-11-28 13:10:42.757213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.653 [2024-11-28 13:10:42.757222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.653 qpair failed and we were unable to recover it.
00:40:12.653 [2024-11-28 13:10:42.767298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.653 [2024-11-28 13:10:42.767388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.653 [2024-11-28 13:10:42.767398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.653 [2024-11-28 13:10:42.767402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.653 [2024-11-28 13:10:42.767406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.653 [2024-11-28 13:10:42.767419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.653 qpair failed and we were unable to recover it.
00:40:12.961 [2024-11-28 13:10:42.777243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.961 [2024-11-28 13:10:42.777284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.961 [2024-11-28 13:10:42.777293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.961 [2024-11-28 13:10:42.777297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.961 [2024-11-28 13:10:42.777301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.961 [2024-11-28 13:10:42.777311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.961 qpair failed and we were unable to recover it.
00:40:12.961 [2024-11-28 13:10:42.787307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.961 [2024-11-28 13:10:42.787349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.961 [2024-11-28 13:10:42.787358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.961 [2024-11-28 13:10:42.787363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.961 [2024-11-28 13:10:42.787367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.961 [2024-11-28 13:10:42.787377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.961 qpair failed and we were unable to recover it.
00:40:12.961 [2024-11-28 13:10:42.797309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.961 [2024-11-28 13:10:42.797354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.962 [2024-11-28 13:10:42.797363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.962 [2024-11-28 13:10:42.797367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.962 [2024-11-28 13:10:42.797371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.962 [2024-11-28 13:10:42.797381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.962 qpair failed and we were unable to recover it.
00:40:12.962 [2024-11-28 13:10:42.807315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.962 [2024-11-28 13:10:42.807357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.962 [2024-11-28 13:10:42.807366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.962 [2024-11-28 13:10:42.807370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.962 [2024-11-28 13:10:42.807374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.962 [2024-11-28 13:10:42.807384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.962 qpair failed and we were unable to recover it.
00:40:12.962 [2024-11-28 13:10:42.817318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.962 [2024-11-28 13:10:42.817360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.962 [2024-11-28 13:10:42.817370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.962 [2024-11-28 13:10:42.817374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.962 [2024-11-28 13:10:42.817378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90
00:40:12.962 [2024-11-28 13:10:42.817388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:40:12.962 qpair failed and we were unable to recover it.
00:40:12.962 [2024-11-28 13:10:42.827301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.962 [2024-11-28 13:10:42.827337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.962 [2024-11-28 13:10:42.827347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.962 [2024-11-28 13:10:42.827351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.962 [2024-11-28 13:10:42.827355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa608000b90 00:40:12.962 [2024-11-28 13:10:42.827365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:12.962 qpair failed and we were unable to recover it. 
00:40:12.962 [2024-11-28 13:10:42.837353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.962 [2024-11-28 13:10:42.837462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.962 [2024-11-28 13:10:42.837526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.962 [2024-11-28 13:10:42.837550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.962 [2024-11-28 13:10:42.837571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa5fc000b90 00:40:12.962 [2024-11-28 13:10:42.837625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.962 qpair failed and we were unable to recover it. 
00:40:12.962 [2024-11-28 13:10:42.847245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.962 [2024-11-28 13:10:42.847315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.962 [2024-11-28 13:10:42.847349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.962 [2024-11-28 13:10:42.847366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.962 [2024-11-28 13:10:42.847381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa5fc000b90 00:40:12.962 [2024-11-28 13:10:42.847419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.962 qpair failed and we were unable to recover it. 
00:40:12.962 [2024-11-28 13:10:42.857350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.962 [2024-11-28 13:10:42.857456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.962 [2024-11-28 13:10:42.857529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.962 [2024-11-28 13:10:42.857554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.962 [2024-11-28 13:10:42.857574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa600000b90 00:40:12.962 [2024-11-28 13:10:42.857628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:40:12.962 qpair failed and we were unable to recover it. 
00:40:12.962 [2024-11-28 13:10:42.867337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.962 [2024-11-28 13:10:42.867400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.962 [2024-11-28 13:10:42.867429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.962 [2024-11-28 13:10:42.867445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.962 [2024-11-28 13:10:42.867458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa600000b90 00:40:12.962 [2024-11-28 13:10:42.867489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:40:12.962 qpair failed and we were unable to recover it. 
00:40:12.962 [2024-11-28 13:10:42.877315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.962 [2024-11-28 13:10:42.877469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.962 [2024-11-28 13:10:42.877533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.962 [2024-11-28 13:10:42.877559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.962 [2024-11-28 13:10:42.877578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6de090 00:40:12.962 [2024-11-28 13:10:42.877631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:40:12.962 qpair failed and we were unable to recover it. 
00:40:12.962 [2024-11-28 13:10:42.887311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.962 [2024-11-28 13:10:42.887382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.962 [2024-11-28 13:10:42.887412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.962 [2024-11-28 13:10:42.887427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.962 [2024-11-28 13:10:42.887441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6de090 00:40:12.962 [2024-11-28 13:10:42.887471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:40:12.963 qpair failed and we were unable to recover it. 00:40:12.963 [2024-11-28 13:10:42.887617] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:40:12.963 A controller has encountered a failure and is being reset. 00:40:12.963 [2024-11-28 13:10:42.887731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ebe70 (9): Bad file descriptor 00:40:12.963 Controller properly reset. 
00:40:12.963 Initializing NVMe Controllers 00:40:12.963 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:12.963 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:12.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:40:12.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:40:12.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:40:12.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:40:12.963 Initialization complete. Launching workers. 00:40:12.963 Starting thread on core 1 00:40:12.963 Starting thread on core 2 00:40:12.963 Starting thread on core 3 00:40:12.963 Starting thread on core 0 00:40:12.963 13:10:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:40:12.963 00:40:12.963 real 0m11.531s 00:40:12.963 user 0m21.320s 00:40:12.963 sys 0m3.767s 00:40:12.963 13:10:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:12.963 13:10:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:12.963 ************************************ 00:40:12.963 END TEST nvmf_target_disconnect_tc2 00:40:12.963 ************************************ 00:40:12.963 13:10:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:40:12.963 13:10:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:40:12.963 13:10:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:40:12.963 13:10:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:12.963 13:10:42 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:40:12.963 13:10:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:12.963 13:10:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:40:12.963 13:10:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:12.963 13:10:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:12.963 rmmod nvme_tcp 00:40:12.963 rmmod nvme_fabrics 00:40:12.963 rmmod nvme_keyring 00:40:12.963 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:12.963 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:40:12.963 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:40:12.963 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3681448 ']' 00:40:12.963 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3681448 00:40:12.963 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3681448 ']' 00:40:12.963 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3681448 00:40:12.963 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:40:12.963 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:12.963 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3681448 00:40:13.252 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:40:13.252 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 
00:40:13.252 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3681448' 00:40:13.252 killing process with pid 3681448 00:40:13.252 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3681448 00:40:13.252 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3681448 00:40:13.252 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:13.252 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:13.252 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:13.252 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:40:13.252 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:40:13.252 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:13.252 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:40:13.252 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:13.252 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:13.252 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:13.252 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:13.252 13:10:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:15.799 13:10:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:15.799 00:40:15.800 real 0m21.936s 00:40:15.800 user 0m49.169s 00:40:15.800 
sys 0m9.921s 00:40:15.800 13:10:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:15.800 13:10:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:40:15.800 ************************************ 00:40:15.800 END TEST nvmf_target_disconnect 00:40:15.800 ************************************ 00:40:15.800 13:10:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:40:15.800 00:40:15.800 real 8m0.137s 00:40:15.800 user 17m25.979s 00:40:15.800 sys 2m26.183s 00:40:15.800 13:10:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:15.800 13:10:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:40:15.800 ************************************ 00:40:15.800 END TEST nvmf_host 00:40:15.800 ************************************ 00:40:15.800 13:10:45 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:40:15.800 13:10:45 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:40:15.800 13:10:45 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:40:15.800 13:10:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:15.800 13:10:45 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:15.800 13:10:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:15.800 ************************************ 00:40:15.800 START TEST nvmf_target_core_interrupt_mode 00:40:15.800 ************************************ 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:40:15.800 * Looking for test storage... 
00:40:15.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:40:15.800 13:10:45 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:15.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.800 --rc 
genhtml_branch_coverage=1 00:40:15.800 --rc genhtml_function_coverage=1 00:40:15.800 --rc genhtml_legend=1 00:40:15.800 --rc geninfo_all_blocks=1 00:40:15.800 --rc geninfo_unexecuted_blocks=1 00:40:15.800 00:40:15.800 ' 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:15.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.800 --rc genhtml_branch_coverage=1 00:40:15.800 --rc genhtml_function_coverage=1 00:40:15.800 --rc genhtml_legend=1 00:40:15.800 --rc geninfo_all_blocks=1 00:40:15.800 --rc geninfo_unexecuted_blocks=1 00:40:15.800 00:40:15.800 ' 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:15.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.800 --rc genhtml_branch_coverage=1 00:40:15.800 --rc genhtml_function_coverage=1 00:40:15.800 --rc genhtml_legend=1 00:40:15.800 --rc geninfo_all_blocks=1 00:40:15.800 --rc geninfo_unexecuted_blocks=1 00:40:15.800 00:40:15.800 ' 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:15.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.800 --rc genhtml_branch_coverage=1 00:40:15.800 --rc genhtml_function_coverage=1 00:40:15.800 --rc genhtml_legend=1 00:40:15.800 --rc geninfo_all_blocks=1 00:40:15.800 --rc geninfo_unexecuted_blocks=1 00:40:15.800 00:40:15.800 ' 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:15.800 
13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.800 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.801 13:10:45 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:15.801 
13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:15.801 ************************************ 00:40:15.801 START TEST nvmf_abort 00:40:15.801 ************************************ 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:40:15.801 * Looking for test storage... 
00:40:15.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:40:15.801 13:10:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:15.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.801 --rc genhtml_branch_coverage=1 00:40:15.801 --rc genhtml_function_coverage=1 00:40:15.801 --rc genhtml_legend=1 00:40:15.801 --rc geninfo_all_blocks=1 00:40:15.801 --rc geninfo_unexecuted_blocks=1 00:40:15.801 00:40:15.801 ' 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:15.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.801 --rc genhtml_branch_coverage=1 00:40:15.801 --rc genhtml_function_coverage=1 00:40:15.801 --rc genhtml_legend=1 00:40:15.801 --rc geninfo_all_blocks=1 00:40:15.801 --rc geninfo_unexecuted_blocks=1 00:40:15.801 00:40:15.801 ' 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:15.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.801 --rc genhtml_branch_coverage=1 00:40:15.801 --rc genhtml_function_coverage=1 00:40:15.801 --rc genhtml_legend=1 00:40:15.801 --rc geninfo_all_blocks=1 00:40:15.801 --rc geninfo_unexecuted_blocks=1 00:40:15.801 00:40:15.801 ' 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:15.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.801 --rc genhtml_branch_coverage=1 00:40:15.801 --rc genhtml_function_coverage=1 00:40:15.801 --rc genhtml_legend=1 00:40:15.801 --rc geninfo_all_blocks=1 00:40:15.801 --rc geninfo_unexecuted_blocks=1 00:40:15.801 00:40:15.801 ' 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:15.801 13:10:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:40:15.801 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:15.802 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:15.802 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:15.802 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.802 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.802 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.802 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:40:15.802 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.802 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:40:15.802 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:15.802 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:16.062 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:16.062 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:16.062 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:16.062 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:16.062 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:16.062 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:16.063 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:16.063 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:16.063 13:10:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:16.063 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:40:16.063 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:40:16.063 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:16.063 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:16.063 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:16.063 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:16.063 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:16.063 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:16.063 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:16.063 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:16.063 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:16.063 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:16.063 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:40:16.063 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:24.208 13:10:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:24.208 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:24.208 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:24.208 
13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:24.208 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:24.208 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:24.209 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:24.209 13:10:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:24.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:24.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:40:24.209 00:40:24.209 --- 10.0.0.2 ping statistics --- 00:40:24.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:24.209 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:24.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:24.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:40:24.209 00:40:24.209 --- 10.0.0.1 ping statistics --- 00:40:24.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:24.209 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3686904 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3686904 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3686904 ']' 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:24.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:24.209 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:24.209 [2024-11-28 13:10:53.431384] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:24.209 [2024-11-28 13:10:53.432545] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:40:24.209 [2024-11-28 13:10:53.432597] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:24.209 [2024-11-28 13:10:53.577408] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:40:24.209 [2024-11-28 13:10:53.637497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:24.209 [2024-11-28 13:10:53.665591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:24.209 [2024-11-28 13:10:53.665661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:24.209 [2024-11-28 13:10:53.665671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:24.209 [2024-11-28 13:10:53.665678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:24.209 [2024-11-28 13:10:53.665684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:24.209 [2024-11-28 13:10:53.667494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:24.209 [2024-11-28 13:10:53.667655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:24.209 [2024-11-28 13:10:53.667656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:24.209 [2024-11-28 13:10:53.735406] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:24.209 [2024-11-28 13:10:53.736311] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:24.209 [2024-11-28 13:10:53.736801] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:24.209 [2024-11-28 13:10:53.736948] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:40:24.209 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:24.209 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:40:24.209 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:24.209 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:24.209 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:24.209 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:24.209 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:40:24.209 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.209 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:24.209 [2024-11-28 13:10:54.312570] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:24.209 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.209 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:40:24.210 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.210 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:24.471 Malloc0 00:40:24.471 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.471 13:10:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:40:24.471 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.471 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:24.471 Delay0 00:40:24.471 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.471 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:24.471 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.471 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:24.471 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.471 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:40:24.471 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.471 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:24.471 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.471 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:24.471 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.472 13:10:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:24.472 [2024-11-28 13:10:54.416557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:24.472 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.472 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:24.472 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.472 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:24.472 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.472 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:40:24.733 [2024-11-28 13:10:54.660922] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:40:27.278 Initializing NVMe Controllers 00:40:27.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:40:27.278 controller IO queue size 128 less than required 00:40:27.278 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:40:27.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:40:27.278 Initialization complete. Launching workers. 
00:40:27.278 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 124, failed: 28473 00:40:27.278 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28531, failed to submit 66 00:40:27.278 success 28473, unsuccessful 58, failed 0 00:40:27.278 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:27.278 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.278 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:27.278 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.278 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:40:27.278 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:40:27.278 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:27.278 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:40:27.278 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:27.278 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:40:27.278 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:27.278 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:27.278 rmmod nvme_tcp 00:40:27.278 rmmod nvme_fabrics 00:40:27.278 rmmod nvme_keyring 00:40:27.278 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:27.278 13:10:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:40:27.278 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:40:27.278 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3686904 ']' 00:40:27.278 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3686904 00:40:27.278 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3686904 ']' 00:40:27.278 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3686904 00:40:27.278 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:40:27.278 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:27.278 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3686904 00:40:27.279 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:27.279 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:27.279 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3686904' 00:40:27.279 killing process with pid 3686904 00:40:27.279 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3686904 00:40:27.279 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3686904 00:40:27.279 13:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:27.279 13:10:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:27.279 13:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:27.279 13:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:40:27.279 13:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:40:27.279 13:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:27.279 13:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:40:27.279 13:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:27.279 13:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:27.279 13:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:27.279 13:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:27.279 13:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:29.193 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:29.193 00:40:29.193 real 0m13.483s 00:40:29.193 user 0m11.429s 00:40:29.193 sys 0m6.866s 00:40:29.193 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:29.193 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:29.193 ************************************ 00:40:29.193 END TEST nvmf_abort 00:40:29.193 ************************************ 00:40:29.193 13:10:59 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:40:29.193 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:29.193 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:29.193 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:29.193 ************************************ 00:40:29.193 START TEST nvmf_ns_hotplug_stress 00:40:29.193 ************************************ 00:40:29.193 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:40:29.455 * Looking for test storage... 
00:40:29.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:40:29.456 13:10:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:40:29.456 13:10:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:29.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.456 --rc genhtml_branch_coverage=1 00:40:29.456 --rc genhtml_function_coverage=1 00:40:29.456 --rc genhtml_legend=1 00:40:29.456 --rc geninfo_all_blocks=1 00:40:29.456 --rc geninfo_unexecuted_blocks=1 00:40:29.456 00:40:29.456 ' 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:29.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.456 --rc genhtml_branch_coverage=1 00:40:29.456 --rc genhtml_function_coverage=1 00:40:29.456 --rc genhtml_legend=1 00:40:29.456 --rc geninfo_all_blocks=1 00:40:29.456 --rc geninfo_unexecuted_blocks=1 00:40:29.456 00:40:29.456 ' 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:29.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.456 --rc genhtml_branch_coverage=1 00:40:29.456 --rc genhtml_function_coverage=1 00:40:29.456 --rc genhtml_legend=1 00:40:29.456 --rc geninfo_all_blocks=1 00:40:29.456 --rc geninfo_unexecuted_blocks=1 00:40:29.456 00:40:29.456 ' 00:40:29.456 13:10:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:29.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.456 --rc genhtml_branch_coverage=1 00:40:29.456 --rc genhtml_function_coverage=1 00:40:29.456 --rc genhtml_legend=1 00:40:29.456 --rc geninfo_all_blocks=1 00:40:29.456 --rc geninfo_unexecuted_blocks=1 00:40:29.456 00:40:29.456 ' 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:29.456 13:10:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.456 
13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:29.456 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:29.457 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:29.457 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:29.457 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:29.457 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:29.457 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:29.457 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:29.457 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:29.457 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:29.457 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:29.457 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:40:29.457 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:29.457 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:29.457 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:29.457 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:29.457 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:29.457 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:29.457 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:29.457 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:29.457 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:29.457 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:40:29.457 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:40:29.457 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:40:37.594 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:37.594 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:40:37.594 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:37.594 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:40:37.595 
13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:37.595 13:11:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:37.595 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:37.595 13:11:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:37.595 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:37.595 
13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:37.595 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:37.595 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:37.595 
13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:37.595 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:37.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:37.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:40:37.596 00:40:37.596 --- 10.0.0.2 ping statistics --- 00:40:37.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:37.596 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:37.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:37.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:40:37.596 00:40:37.596 --- 10.0.0.1 ping statistics --- 00:40:37.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:37.596 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:37.596 13:11:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3691589 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3691589 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3691589 ']' 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:37.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:37.596 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:40:37.596 [2024-11-28 13:11:07.043924] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:37.596 [2024-11-28 13:11:07.045005] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:40:37.596 [2024-11-28 13:11:07.045049] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:37.596 [2024-11-28 13:11:07.187522] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:40:37.596 [2024-11-28 13:11:07.246537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:37.596 [2024-11-28 13:11:07.266375] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:37.596 [2024-11-28 13:11:07.266411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:37.596 [2024-11-28 13:11:07.266420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:37.596 [2024-11-28 13:11:07.266427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:37.596 [2024-11-28 13:11:07.266433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
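For readability, the namespace plumbing that `nvmf/common.sh:nvmf_tcp_init` performs in the records above can be sketched as below. This is a hedged dry-run reconstruction, not the harness's own code: commands are printed rather than executed (the real ones need root and the `cvl_0_0`/`cvl_0_1` interfaces), and the interface/IP names are copied from the log.

```shell
#!/bin/sh
# Dry-run sketch of the netns setup seen in nvmf/common.sh:nvmf_tcp_init.
# run() prints each command instead of executing it, so the sequence can
# be inspected without root or the cvl_0_* interfaces being present.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # names taken from the log above
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"        # target side lives in the namespace
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF" # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # target IP
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                          # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator
```

The ping pair at the end is what produces the two ping-statistics blocks in the log: the target NIC sits inside the namespace with 10.0.0.2, the initiator NIC stays in the root namespace with 10.0.0.1, and the iptables rule opens the NVMe/TCP port 4420 between them.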
00:40:37.596 [2024-11-28 13:11:07.267892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:37.596 [2024-11-28 13:11:07.268045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:37.596 [2024-11-28 13:11:07.268046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:37.596 [2024-11-28 13:11:07.325178] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:37.596 [2024-11-28 13:11:07.326152] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:37.596 [2024-11-28 13:11:07.326857] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:37.596 [2024-11-28 13:11:07.326981] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:37.859 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:37.859 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:40:37.859 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:37.859 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:37.859 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:40:37.859 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:37.859 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
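The repetitive remove/add/resize records that follow implement the namespace hotplug stress loop from `target/ns_hotplug_stress.sh`. A hedged sketch of that loop, with the `rpc.py` invocations printed rather than executed; the NQN and starting `null_size` are copied from the log, while the fixed iteration count is an illustrative stand-in for the real loop's "while `spdk_nvme_perf` is still running" check:

```shell
#!/bin/sh
# Dry-run sketch of the ns_hotplug_stress loop visible below: while the
# perf process is alive, detach the Delay0 namespace, re-attach it, and
# grow the NULL1 bdev by one block per iteration.
rpc() { echo "+ rpc.py $*"; }   # stand-in for scripts/rpc.py

NQN=nqn.2016-06.io.spdk:cnode1
null_size=1000
iterations=3                    # real test: loops until spdk_nvme_perf exits

i=0
while [ "$i" -lt "$iterations" ]; do   # real test: while kill -0 "$PERF_PID"
    rpc nvmf_subsystem_remove_ns "$NQN" 1
    rpc nvmf_subsystem_add_ns "$NQN" Delay0
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size"
    i=$((i + 1))
done
```

This matches the pattern in the records below: each pass emits a `nvmf_subsystem_remove_ns`, a `nvmf_subsystem_add_ns ... Delay0`, and a `bdev_null_resize NULL1 <n>` with `n` counting up from 1001, all while `kill -0 $PERF_PID` confirms the I/O generator is still alive.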
00:40:37.859 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:38.120 [2024-11-28 13:11:08.040848] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:38.120 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:38.381 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:38.381 [2024-11-28 13:11:08.433542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:38.381 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:38.641 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:40:38.902 Malloc0 00:40:38.902 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:40:38.902 Delay0 00:40:38.902 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:39.163 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:40:39.425 NULL1 00:40:39.425 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:40:39.686 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3692196 00:40:39.686 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:40:39.686 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:39.686 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:39.686 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:39.946 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:40:39.946 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:40:40.205 true 00:40:40.205 13:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:40.205 13:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:40.205 13:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:40.465 13:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:40:40.465 13:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:40:40.725 true 00:40:40.725 13:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:40.725 13:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:40.986 13:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:40.986 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:40:40.986 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:40:41.247 true 00:40:41.247 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:41.247 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:41.507 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:41.766 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:40:41.766 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:40:41.766 true 00:40:41.766 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:41.766 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:42.026 13:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:42.286 13:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:40:42.286 13:11:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:40:42.286 true 00:40:42.546 13:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:42.546 13:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:42.546 13:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:42.806 13:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:40:42.806 13:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:40:42.806 true 00:40:42.806 13:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:42.806 13:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:43.066 13:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:43.326 13:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:40:43.326 13:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:40:43.326 true 00:40:43.586 13:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:43.586 13:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:43.586 13:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:43.845 13:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:40:43.845 13:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:40:44.104 true 00:40:44.104 13:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:44.104 13:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:44.104 13:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:44.363 13:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:40:44.363 13:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:40:44.622 true 00:40:44.622 13:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:44.622 13:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:44.882 13:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:44.882 13:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:40:44.882 13:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:40:45.147 true 00:40:45.147 13:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:45.147 13:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:45.407 13:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:45.407 13:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:40:45.407 13:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:40:45.667 true 00:40:45.667 13:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:45.667 13:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:45.926 13:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:45.926 13:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:40:45.926 13:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:40:46.187 true 00:40:46.187 13:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:46.187 13:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:46.448 13:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:46.709 13:11:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:40:46.709 13:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:40:46.709 true 00:40:46.709 13:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:46.709 13:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:46.970 13:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:47.231 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:40:47.231 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:40:47.231 true 00:40:47.231 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:47.231 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:47.492 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:40:47.752 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:40:47.752 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:40:47.752 true 00:40:48.014 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:48.014 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:48.014 13:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:48.276 13:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:40:48.276 13:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:40:48.538 true 00:40:48.538 13:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:48.538 13:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:48.538 13:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:40:48.799 13:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:40:48.799 13:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:40:49.059 true 00:40:49.059 13:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:49.059 13:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:49.059 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:49.320 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:40:49.320 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:40:49.581 true 00:40:49.581 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:49.581 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:49.842 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:49.842 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:40:49.842 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:40:50.121 true 00:40:50.121 13:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:50.121 13:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:50.382 13:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:50.382 13:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:40:50.382 13:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:40:50.644 true 00:40:50.644 13:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:50.644 13:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:50.903 13:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:51.164 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:40:51.164 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:40:51.164 true 00:40:51.164 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:51.164 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:51.425 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:51.686 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:40:51.686 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:40:51.686 true 00:40:51.686 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:51.686 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:51.947 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:52.209 13:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:40:52.209 13:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:40:52.209 true 00:40:52.470 13:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:52.470 13:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:52.471 13:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:52.733 13:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:40:52.733 13:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:40:52.994 true 00:40:52.994 13:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:52.994 13:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:52.994 13:11:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:53.256 13:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:40:53.256 13:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:40:53.517 true 00:40:53.517 13:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:53.517 13:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:53.778 13:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:53.778 13:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:40:53.778 13:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:40:54.040 true 00:40:54.040 13:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:54.040 13:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:40:54.301 13:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:54.301 13:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:40:54.301 13:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:40:54.562 true 00:40:54.562 13:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:54.562 13:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:54.823 13:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:55.100 13:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:40:55.100 13:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:40:55.100 true 00:40:55.100 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:55.100 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:40:55.360 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:55.621 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:40:55.621 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:40:55.621 true 00:40:55.621 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:55.621 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:55.881 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:56.142 13:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:40:56.142 13:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:40:56.142 true 00:40:56.403 13:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:56.403 13:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:56.403 13:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:56.664 13:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:40:56.664 13:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:40:56.664 true 00:40:56.925 13:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:56.925 13:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:56.925 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:57.185 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:40:57.185 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:40:57.446 true 00:40:57.446 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:57.446 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:57.446 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:57.707 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:40:57.707 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:40:57.968 true 00:40:57.968 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:57.968 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:58.229 13:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:58.229 13:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:40:58.229 13:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:40:58.490 true 00:40:58.490 13:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:58.490 13:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:58.751 13:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:58.751 13:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:40:58.751 13:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:40:59.012 true 00:40:59.012 13:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:59.012 13:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:59.274 13:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:59.534 13:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:40:59.534 13:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:40:59.534 true 00:40:59.534 13:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:40:59.534 13:11:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:59.795 13:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:00.057 13:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:41:00.057 13:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:41:00.057 true 00:41:00.317 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:41:00.317 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:00.317 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:00.578 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:41:00.578 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:41:00.839 true 00:41:00.839 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 
00:41:00.839 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:00.839 13:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:01.099 13:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:41:01.099 13:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:41:01.359 true 00:41:01.359 13:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:41:01.359 13:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:01.359 13:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:01.621 13:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:41:01.621 13:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:41:01.883 true 00:41:01.883 13:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 3692196 00:41:01.883 13:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:02.145 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:02.145 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:41:02.145 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:41:02.407 true 00:41:02.407 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:41:02.407 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:02.667 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:02.667 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:41:02.667 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:41:02.927 true 00:41:02.927 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:41:02.927 13:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:03.188 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:03.188 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:41:03.188 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:41:03.450 true 00:41:03.450 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:41:03.450 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:03.712 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:03.973 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:41:03.973 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:41:03.973 true 00:41:03.973 13:11:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:41:03.973 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:04.235 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:04.497 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:41:04.497 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:41:04.497 true 00:41:04.497 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:41:04.497 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:04.759 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:05.020 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:41:05.020 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:41:05.020 true 
00:41:05.020 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:41:05.020 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:05.280 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:05.539 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:41:05.539 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:41:05.799 true 00:41:05.799 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:41:05.799 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:05.799 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:06.060 13:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:41:06.060 13:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 
00:41:06.321 true 00:41:06.321 13:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:41:06.321 13:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:06.321 13:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:06.647 13:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:41:06.648 13:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:41:06.913 true 00:41:06.913 13:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:41:06.913 13:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:06.913 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:07.206 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:41:07.206 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1050 00:41:07.467 true 00:41:07.467 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:41:07.467 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:07.467 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:07.729 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:41:07.729 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:41:07.990 true 00:41:07.990 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:41:07.990 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:08.252 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:08.252 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:41:08.252 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:41:08.512 true 00:41:08.512 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:41:08.512 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:08.774 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:08.774 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:41:08.774 13:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:41:09.035 true 00:41:09.035 13:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:41:09.035 13:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:09.296 13:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:09.558 13:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:41:09.558 13:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:41:09.558 true 00:41:09.558 13:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:41:09.558 13:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:09.819 13:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:09.819 Initializing NVMe Controllers 00:41:09.819 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:09.819 Controller IO queue size 128, less than required. 00:41:09.819 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:09.819 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:41:09.819 Initialization complete. Launching workers. 
00:41:09.819 ======================================================== 00:41:09.819 Latency(us) 00:41:09.819 Device Information : IOPS MiB/s Average min max 00:41:09.819 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30020.95 14.66 4263.57 1121.07 11574.95 00:41:09.819 ======================================================== 00:41:09.819 Total : 30020.95 14.66 4263.57 1121.07 11574.95 00:41:09.819 00:41:10.081 13:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:41:10.081 13:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:41:10.081 true 00:41:10.081 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3692196 00:41:10.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3692196) - No such process 00:41:10.081 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3692196 00:41:10.081 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:10.344 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:41:10.605 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:41:10.605 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:41:10.605 
13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:41:10.605 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:41:10.605 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:41:10.605 null0 00:41:10.605 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:41:10.605 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:41:10.605 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:41:10.866 null1 00:41:10.866 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:41:10.866 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:41:10.866 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:41:10.866 null2 00:41:10.866 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:41:10.866 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:41:10.866 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:41:11.126 null3 00:41:11.126 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:41:11.126 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:41:11.126 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:41:11.387 null4 00:41:11.387 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:41:11.387 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:41:11.387 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:41:11.648 null5 00:41:11.648 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:41:11.648 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:41:11.648 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:41:11.648 null6 00:41:11.648 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:41:11.648 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:41:11.648 13:11:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:41:11.910 null7 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:41:11.910 13:11:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:41:11.910 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:41:11.911 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:41:11.911 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:41:11.911 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:41:11.911 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:41:11.911 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:41:11.911 13:11:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:11.911 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:41:11.911 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:41:11.911 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:41:11.911 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:41:11.911 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:41:11.911 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3698458 3698460 3698461 3698463 3698466 3698468 3698470 3698473 00:41:11.911 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:41:11.911 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:41:11.911 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:11.911 13:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:41:12.171 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:12.171 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:41:12.171 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:41:12.171 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:41:12.171 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:41:12.171 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:41:12.171 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:41:12.171 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:41:12.171 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:12.171 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:12.171 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:41:12.172 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:12.172 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:12.172 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:41:12.432 13:11:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:41:12.432 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:41:12.692 13:11:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:12.692 13:11:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:41:12.692 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:41:12.953 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:41:12.953 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:12.953 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:41:12.953 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:41:12.953 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:41:12.953 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:41:12.953 13:11:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:41:12.953 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:12.953 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:12.953 13:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:41:12.953 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:12.953 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:12.953 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:41:12.953 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:12.953 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:12.953 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:41:12.953 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:12.953 13:11:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:12.953 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:41:13.213 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:13.213 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:13.213 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:41:13.213 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:13.213 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:13.213 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:41:13.213 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:13.213 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:13.213 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:41:13.213 13:11:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:41:13.213 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:13.213 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:13.213 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:41:13.213 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:41:13.213 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:41:13.213 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:13.213 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:41:13.213 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:13.213 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:41:13.213 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:41:13.213 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:41:13.473 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:41:13.473 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:41:13.473 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:13.473 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:13.473 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:41:13.473 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:13.473 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:13.473 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:41:13.473 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:13.473 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:13.473 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:41:13.473 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:13.473 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:13.473 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:41:13.473 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:13.473 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:13.473 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:41:13.473 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:41:13.473 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:13.473 
13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:13.473 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:41:13.473 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:13.474 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:41:13.734 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:13.734 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:13.734 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:41:13.734 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:41:13.735 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:41:13.735 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:41:13.735 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:13.735 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:13.735 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:41:13.735 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:41:13.735 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:13.735 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:13.735 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:13.735 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:41:13.735 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:13.735 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:41:13.735 13:11:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:41:13.735 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:13.735 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:13.735 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:41:13.735 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:13.735 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:13.735 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:41:13.994 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:13.994 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:13.994 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:41:13.994 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:41:13.994 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:13.994 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:13.994 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:41:13.994 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:41:13.994 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:41:13.994 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:13.994 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:13.994 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:41:13.994 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:41:13.994 13:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:13.994 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:41:13.994 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:13.994 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:13.994 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:41:14.255 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:41:14.255 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.255 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.255 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.255 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:41:14.255 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.255 13:11:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:41:14.255 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.255 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.255 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:41:14.255 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:41:14.255 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.255 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.255 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:41:14.255 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.255 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.255 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:41:14.255 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:41:14.255 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.255 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.255 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:41:14.255 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:41:14.255 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:41:14.255 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
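The repeating `(( ++i ))` / `(( i < 10 ))` / `nvmf_subsystem_add_ns` / `nvmf_subsystem_remove_ns` records above are lines @16-@18 of target/ns_hotplug_stress.sh running one hot-plug worker per namespace. A minimal stand-alone sketch of that pattern follows; the NQN and NSID/bdev pairing are taken from the trace, but the RPC invocation is stubbed with a shell function here so the sketch runs without an SPDK target, so this is an illustration, not the real script:

```shell
# Sketch of the hotplug stress loop seen in the trace above.
# rpc_py is a stand-in for .../spdk/scripts/rpc.py (assumption: the real
# script calls rpc.py directly); everything else mirrors the logged commands.
NQN=nqn.2016-06.io.spdk:cnode1

rpc_py() {                        # stub: print the RPC instead of issuing it
    echo "rpc $*"
}

add_remove() {                    # hot-plug one namespace repeatedly
    local nsid=$1 bdev=$2 i=0
    while (( i < 10 )); do
        rpc_py nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"
        rpc_py nvmf_subsystem_remove_ns "$NQN" "$nsid"
        (( ++i ))
    done
}

# one background worker per namespace: NSIDs 1..8 backed by null0..null7,
# which is why the add/remove records in the log interleave out of order
for n in {1..8}; do
    add_remove "$n" "null$((n - 1))" &
done
wait
```

Running the workers in the background is what produces the interleaved ordering of add/remove records visible in the log.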
00:41:14.516 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:41:14.777 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:41:14.777 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.777 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.777 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:41:14.777 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:41:14.777 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.777 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.777 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:41:14.777 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:41:14.777 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:41:14.777 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:14.777 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:41:14.777 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.777 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.777 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:41:14.777 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:41:14.777 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:41:14.777 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.777 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.777 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:41:15.038 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.038 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.038 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:41:15.038 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.038 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.038 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:41:15.038 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.038 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.038 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:41:15.038 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:41:15.038 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.038 13:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:41:15.038 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.038 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:41:15.038 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.038 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:41:15.038 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.038 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.038 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:41:15.038 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:41:15.038 13:11:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:41:15.038 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:41:15.038 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:15.038 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:41:15.298 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:41:15.298 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.298 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.298 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:41:15.298 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:41:15.298 13:11:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.298 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.298 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.298 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:41:15.298 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.298 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:41:15.298 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.298 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.298 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:41:15.298 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.298 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.298 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:41:15.298 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.298 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.298 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:41:15.298 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.298 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.298 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:41:15.558 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.558 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.558 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:41:15.558 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:41:15.558 13:11:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:41:15.558 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:15.558 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:41:15.558 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:41:15.558 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:41:15.558 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:41:15.558 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:41:15.558 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.558 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.558 13:11:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.558 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.558 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.558 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.558 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.558 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.558 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.558 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.817 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.817 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:15.818 rmmod nvme_tcp 00:41:15.818 rmmod nvme_fabrics 00:41:15.818 rmmod nvme_keyring 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3691589 ']' 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3691589 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3691589 ']' 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3691589 00:41:15.818 
13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3691589 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3691589' 00:41:15.818 killing process with pid 3691589 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3691589 00:41:15.818 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3691589 00:41:16.078 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:16.078 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:16.078 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:16.078 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:41:16.078 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:41:16.078 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:16.078 13:11:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:41:16.078 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:16.078 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:16.078 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:16.078 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:16.078 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:18.621 00:41:18.621 real 0m48.874s 00:41:18.621 user 3m2.962s 00:41:18.621 sys 0m22.102s 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:41:18.621 ************************************ 00:41:18.621 END TEST nvmf_ns_hotplug_stress 00:41:18.621 ************************************ 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 
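The `killing process with pid 3691589` sequence in the teardown above comes from the killprocess helper (common/autotest_common.sh@954-978): probe the pid with `kill -0`, check the process name so a `sudo` wrapper is never signalled, then kill and reap it. A simplified stand-alone sketch of that logic, using a throwaway `sleep` as the target instead of the nvmf_tgt reactor; it is an illustration of the trace, not the real SPDK helper:

```shell
# Simplified stand-in for the killprocess helper exercised in the trace.
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1   # is the process still running?
    name=$(ps --no-headers -o comm= "$pid")  # same probe the trace shows
    if [ "$name" = sudo ]; then              # never kill the sudo wrapper
        return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap it; ignore SIGTERM status
}

sleep 60 &       # throwaway target process standing in for the nvmf app
killprocess $!
```

The `wait` at the end is what lets the test proceed to module unload and namespace cleanup knowing the target process is fully gone, which matches the ordering of the teardown records in the log.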
00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:18.621 ************************************ 00:41:18.621 START TEST nvmf_delete_subsystem 00:41:18.621 ************************************ 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:41:18.621 * Looking for test storage... 00:41:18.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:41:18.621 13:11:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:18.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.621 --rc genhtml_branch_coverage=1 00:41:18.621 --rc genhtml_function_coverage=1 00:41:18.621 --rc genhtml_legend=1 00:41:18.621 --rc geninfo_all_blocks=1 00:41:18.621 --rc geninfo_unexecuted_blocks=1 00:41:18.621 00:41:18.621 ' 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:18.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.621 --rc genhtml_branch_coverage=1 00:41:18.621 --rc genhtml_function_coverage=1 00:41:18.621 --rc genhtml_legend=1 00:41:18.621 --rc geninfo_all_blocks=1 00:41:18.621 --rc geninfo_unexecuted_blocks=1 00:41:18.621 00:41:18.621 ' 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:18.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.621 --rc genhtml_branch_coverage=1 00:41:18.621 --rc genhtml_function_coverage=1 00:41:18.621 --rc genhtml_legend=1 00:41:18.621 --rc geninfo_all_blocks=1 00:41:18.621 --rc geninfo_unexecuted_blocks=1 00:41:18.621 00:41:18.621 ' 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:18.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.621 --rc genhtml_branch_coverage=1 00:41:18.621 --rc genhtml_function_coverage=1 00:41:18.621 --rc genhtml_legend=1 00:41:18.621 --rc geninfo_all_blocks=1 00:41:18.621 --rc geninfo_unexecuted_blocks=1 00:41:18.621 00:41:18.621 ' 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:18.621 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:18.622 13:11:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:41:18.622 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:26.756 13:11:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:26.756 13:11:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:26.756 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:26.756 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:26.756 13:11:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:26.756 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:26.756 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:26.757 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:41:26.757 13:11:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:26.757 13:11:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:26.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:26.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:41:26.757 00:41:26.757 --- 10.0.0.2 ping statistics --- 00:41:26.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:26.757 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:26.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:26.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:41:26.757 00:41:26.757 --- 10.0.0.1 ping statistics --- 00:41:26.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:26.757 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:26.757 
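The namespace plumbing that `nvmf_tcp_init` traces above (one physical port moved into a namespace for the target, the other left in the default namespace for the initiator, addresses 10.0.0.1/10.0.0.2 assigned, port 4420 opened, then a ping in each direction) can be summarized in the following dry-run sketch. The interface and namespace names are taken from the log; the `run` wrapper only echoes each command so the sketch can be inspected without root or real NICs — drop the `echo` to apply it for real.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the NVMe/TCP test topology set up by nvmf/common.sh,
# as seen in the log above. run() echoes instead of executing, so no
# root privileges or physical interfaces are required.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # port moved into the target's network namespace
INITIATOR_IF=cvl_0_1     # port left in the default namespace
NS=cvl_0_0_ns_spdk       # namespace that will host nvmf_tgt

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Allow inbound NVMe/TCP traffic on the listening port (4420)
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Verify reachability in both directions, as the log does
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

With the target running under `ip netns exec cvl_0_0_ns_spdk`, the initiator-side tools in the default namespace reach it over a real TCP path while sharing one host.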
13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3703319 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3703319 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3703319 ']' 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:26.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:26.757 13:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:26.757 [2024-11-28 13:11:55.822374] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:26.757 [2024-11-28 13:11:55.823357] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:41:26.757 [2024-11-28 13:11:55.823397] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:26.757 [2024-11-28 13:11:55.962347] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:41:26.757 [2024-11-28 13:11:56.020525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:26.757 [2024-11-28 13:11:56.037797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:26.757 [2024-11-28 13:11:56.037830] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:26.757 [2024-11-28 13:11:56.037838] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:26.757 [2024-11-28 13:11:56.037845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:26.757 [2024-11-28 13:11:56.037850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:26.757 [2024-11-28 13:11:56.039077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:26.757 [2024-11-28 13:11:56.039080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:26.757 [2024-11-28 13:11:56.088664] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:26.757 [2024-11-28 13:11:56.089317] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:26.757 [2024-11-28 13:11:56.089606] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:26.757 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:26.757 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:41:26.757 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:26.757 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:26.757 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:26.757 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:26.757 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:26.757 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.757 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 
-- # set +x 00:41:26.757 [2024-11-28 13:11:56.664022] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:26.757 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.757 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:41:26.757 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.757 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:26.758 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.758 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:26.758 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.758 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:26.758 [2024-11-28 13:11:56.696512] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:26.758 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.758 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:41:26.758 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.758 13:11:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:26.758 NULL1 00:41:26.758 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.758 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:41:26.758 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.758 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:26.758 Delay0 00:41:26.758 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.758 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:26.758 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.758 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:26.758 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.758 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3703649 00:41:26.758 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:41:26.758 13:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:41:27.019 [2024-11-28 13:11:56.922947] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:41:28.931 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:28.931 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.932 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 starting I/O failed: -6 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 starting I/O failed: -6 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 starting I/O failed: -6 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 starting I/O failed: -6 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 
starting I/O failed: -6 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 starting I/O failed: -6 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 starting I/O failed: -6 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 starting I/O failed: -6 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 starting I/O failed: -6 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 starting I/O failed: -6 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 starting I/O failed: -6 00:41:28.932 starting I/O failed: -6 00:41:28.932 starting I/O failed: -6 00:41:28.932 starting I/O failed: -6 00:41:28.932 starting I/O failed: -6 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 starting I/O failed: -6 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 starting 
I/O failed: -6 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 starting I/O failed: -6 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 starting I/O failed: -6 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 starting I/O failed: -6 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 starting I/O failed: -6 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 starting I/O failed: -6 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 starting I/O failed: -6 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 starting I/O failed: -6 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 
starting I/O failed: -6 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 starting I/O failed: -6 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 [2024-11-28 13:11:59.014388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1256100 is same with the state(6) to be set 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 
00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed 
with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.932 Write completed with error (sct=0, sc=8) 00:41:28.932 Read completed with error (sct=0, sc=8) 00:41:28.933 Write completed with error (sct=0, sc=8) 00:41:28.933 Read completed with error (sct=0, sc=8) 00:41:28.933 Read completed with error (sct=0, sc=8) 00:41:28.933 Write completed with error (sct=0, sc=8) 00:41:28.933 Write completed with error (sct=0, sc=8) 00:41:28.933 Write completed with error (sct=0, sc=8) 00:41:28.933 Write completed with error (sct=0, sc=8) 00:41:28.933 Write completed with error (sct=0, sc=8) 00:41:28.933 Read completed with error (sct=0, sc=8) 
00:41:28.933 Write completed with error (sct=0, sc=8) 00:41:28.933 Write completed with error (sct=0, sc=8) 00:41:28.933 Read completed with error (sct=0, sc=8) 00:41:28.933 Read completed with error (sct=0, sc=8) 00:41:28.933 Read completed with error (sct=0, sc=8) 00:41:28.933 Write completed with error (sct=0, sc=8) 00:41:28.933 Read completed with error (sct=0, sc=8) 00:41:28.933 Write completed with error (sct=0, sc=8) 00:41:28.933 Write completed with error (sct=0, sc=8) 00:41:28.933 Write completed with error (sct=0, sc=8) 00:41:28.933 [2024-11-28 13:11:59.014996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f618c00d350 is same with the state(6) to be set 00:41:29.874 [2024-11-28 13:11:59.982883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125dbe0 is same with the state(6) to be set 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, 
sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 [2024-11-28 13:12:00.011012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12562e0 is same with the state(6) to be set 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 [2024-11-28 13:12:00.011368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a1b0 is same with the state(6) to be set 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 
00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 [2024-11-28 13:12:00.015071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f618c00d800 is same with the state(6) to be set 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 
Write completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Write completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 Read completed with error (sct=0, sc=8) 00:41:30.135 [2024-11-28 13:12:00.015170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f618c00d020 is same with the state(6) to be set 00:41:30.135 Initializing NVMe Controllers 00:41:30.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:30.136 Controller IO queue size 128, less than required. 00:41:30.136 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:30.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:41:30.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:41:30.136 Initialization complete. Launching workers. 
00:41:30.136 ======================================================== 00:41:30.136 Latency(us) 00:41:30.136 Device Information : IOPS MiB/s Average min max 00:41:30.136 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.78 0.08 891474.56 516.96 1012040.94 00:41:30.136 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.82 0.08 912209.56 418.33 1043591.26 00:41:30.136 ======================================================== 00:41:30.136 Total : 335.59 0.16 901658.02 418.33 1043591.26 00:41:30.136 00:41:30.136 [2024-11-28 13:12:00.015723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x125dbe0 (9): Bad file descriptor 00:41:30.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:41:30.136 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.136 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:41:30.136 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3703649 00:41:30.136 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:41:30.397 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:41:30.397 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3703649 00:41:30.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3703649) - No such process 00:41:30.658 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3703649 00:41:30.658 13:12:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:41:30.658 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3703649 00:41:30.658 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:41:30.658 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:30.658 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:41:30.658 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:30.658 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3703649 00:41:30.658 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:41:30.659 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:30.659 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:30.659 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:30.659 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:41:30.659 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.659 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:41:30.659 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.659 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:30.659 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.659 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:30.659 [2024-11-28 13:12:00.548339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:30.659 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.659 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:30.659 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.659 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:30.659 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.659 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3704362 00:41:30.659 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:41:30.659 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:41:30.659 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3704362 00:41:30.659 13:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:41:30.659 [2024-11-28 13:12:00.744378] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:41:31.231 13:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:41:31.231 13:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3704362 00:41:31.231 13:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:41:31.491 13:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:41:31.491 13:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3704362 00:41:31.491 13:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:41:32.064 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:41:32.064 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3704362 00:41:32.064 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:41:32.635 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- 
# (( delay++ > 20 )) 00:41:32.635 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3704362 00:41:32.635 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:41:33.207 13:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:41:33.207 13:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3704362 00:41:33.207 13:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:41:33.778 13:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:41:33.778 13:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3704362 00:41:33.778 13:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:41:34.039 Initializing NVMe Controllers 00:41:34.039 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:34.039 Controller IO queue size 128, less than required. 00:41:34.039 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:34.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:41:34.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:41:34.039 Initialization complete. Launching workers. 
00:41:34.039 ======================================================== 00:41:34.039 Latency(us) 00:41:34.039 Device Information : IOPS MiB/s Average min max 00:41:34.039 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002167.47 1000049.88 1005469.57 00:41:34.039 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003729.95 1000028.56 1010052.37 00:41:34.039 ======================================================== 00:41:34.039 Total : 256.00 0.12 1002948.71 1000028.56 1010052.37 00:41:34.039 00:41:34.039 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:41:34.039 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3704362 00:41:34.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3704362) - No such process 00:41:34.039 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3704362 00:41:34.039 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:41:34.039 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:41:34.039 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:34.039 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:41:34.039 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:34.039 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:41:34.039 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:41:34.039 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:34.039 rmmod nvme_tcp 00:41:34.039 rmmod nvme_fabrics 00:41:34.039 rmmod nvme_keyring 00:41:34.039 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3703319 ']' 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3703319 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3703319 ']' 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3703319 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3703319 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:34.300 13:12:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3703319' 00:41:34.300 killing process with pid 3703319 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3703319 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3703319 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:34.300 13:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:34.300 13:12:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:36.845 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:36.845 00:41:36.845 real 0m18.207s 00:41:36.845 user 0m26.556s 00:41:36.845 sys 0m7.373s 00:41:36.845 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:36.845 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:36.845 ************************************ 00:41:36.845 END TEST nvmf_delete_subsystem 00:41:36.845 ************************************ 00:41:36.845 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:41:36.845 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:36.845 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:36.845 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:36.845 ************************************ 00:41:36.845 START TEST nvmf_host_management 00:41:36.845 ************************************ 00:41:36.845 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:41:36.845 * Looking for test storage... 
00:41:36.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:36.845 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:36.845 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:41:36.845 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:36.845 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:36.845 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:36.845 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:36.845 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:36.845 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:41:36.845 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:41:36.845 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:41:36.845 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:41:36.845 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:41:36.845 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:41:36.845 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:41:36.846 13:12:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:36.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.846 --rc genhtml_branch_coverage=1 00:41:36.846 --rc genhtml_function_coverage=1 00:41:36.846 --rc genhtml_legend=1 00:41:36.846 --rc geninfo_all_blocks=1 00:41:36.846 --rc geninfo_unexecuted_blocks=1 00:41:36.846 00:41:36.846 ' 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:36.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.846 --rc genhtml_branch_coverage=1 00:41:36.846 --rc genhtml_function_coverage=1 00:41:36.846 --rc genhtml_legend=1 00:41:36.846 --rc geninfo_all_blocks=1 00:41:36.846 --rc geninfo_unexecuted_blocks=1 00:41:36.846 00:41:36.846 ' 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:36.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.846 --rc genhtml_branch_coverage=1 00:41:36.846 --rc genhtml_function_coverage=1 00:41:36.846 --rc genhtml_legend=1 00:41:36.846 --rc geninfo_all_blocks=1 00:41:36.846 --rc geninfo_unexecuted_blocks=1 00:41:36.846 00:41:36.846 ' 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:36.846 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.846 --rc genhtml_branch_coverage=1 00:41:36.846 --rc genhtml_function_coverage=1 00:41:36.846 --rc genhtml_legend=1 00:41:36.846 --rc geninfo_all_blocks=1 00:41:36.846 --rc geninfo_unexecuted_blocks=1 00:41:36.846 00:41:36.846 ' 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:36.846 13:12:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.846 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.846 
13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:41:36.847 13:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:41:44.991 
13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:44.991 13:12:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:44.991 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:44.991 13:12:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:44.991 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:44.991 13:12:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:44.991 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:44.991 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:44.991 13:12:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:44.991 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:44.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:44.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:41:44.992 00:41:44.992 --- 10.0.0.2 ping statistics --- 00:41:44.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:44.992 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:44.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:44.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:41:44.992 00:41:44.992 --- 10.0.0.1 ping statistics --- 00:41:44.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:44.992 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:44.992 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3709592 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3709592 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3709592 ']' 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:44.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:44.992 [2024-11-28 13:12:14.091203] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:44.992 [2024-11-28 13:12:14.092214] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:41:44.992 [2024-11-28 13:12:14.092256] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:44.992 [2024-11-28 13:12:14.233213] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:41:44.992 [2024-11-28 13:12:14.292365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:44.992 [2024-11-28 13:12:14.321290] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:44.992 [2024-11-28 13:12:14.321340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:44.992 [2024-11-28 13:12:14.321349] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:44.992 [2024-11-28 13:12:14.321357] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:41:44.992 [2024-11-28 13:12:14.321364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:44.992 [2024-11-28 13:12:14.323632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:44.992 [2024-11-28 13:12:14.323797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:44.992 [2024-11-28 13:12:14.323954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:44.992 [2024-11-28 13:12:14.323955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:41:44.992 [2024-11-28 13:12:14.392785] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:44.992 [2024-11-28 13:12:14.393785] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:44.992 [2024-11-28 13:12:14.393969] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:44.992 [2024-11-28 13:12:14.394291] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:44.992 [2024-11-28 13:12:14.394377] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:44.992 [2024-11-28 13:12:14.944799] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:44.992 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:44.992 13:12:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:41:44.993 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:41:44.993 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:41:44.993 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.993 13:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:44.993 Malloc0 00:41:44.993 [2024-11-28 13:12:15.041061] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:44.993 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.993 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:41:44.993 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:44.993 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:44.993 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3709951 00:41:44.993 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3709951 /var/tmp/bdevperf.sock 00:41:44.993 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3709951 ']' 00:41:44.993 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:41:44.993 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:44.993 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:44.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:44.993 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:41:44.993 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:41:44.993 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:44.993 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:44.993 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:41:44.993 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:41:44.993 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:44.993 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:44.993 { 00:41:44.993 "params": { 00:41:44.993 "name": "Nvme$subsystem", 00:41:44.993 "trtype": "$TEST_TRANSPORT", 00:41:44.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:44.993 "adrfam": "ipv4", 00:41:44.993 "trsvcid": "$NVMF_PORT", 00:41:44.993 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:41:44.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:44.993 "hdgst": ${hdgst:-false}, 00:41:44.993 "ddgst": ${ddgst:-false} 00:41:44.993 }, 00:41:44.993 "method": "bdev_nvme_attach_controller" 00:41:44.993 } 00:41:44.993 EOF 00:41:44.993 )") 00:41:44.993 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:41:44.993 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:41:44.993 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:41:44.993 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:44.993 "params": { 00:41:44.993 "name": "Nvme0", 00:41:44.993 "trtype": "tcp", 00:41:44.993 "traddr": "10.0.0.2", 00:41:44.993 "adrfam": "ipv4", 00:41:44.993 "trsvcid": "4420", 00:41:44.993 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:44.993 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:44.993 "hdgst": false, 00:41:44.993 "ddgst": false 00:41:44.993 }, 00:41:44.993 "method": "bdev_nvme_attach_controller" 00:41:44.993 }' 00:41:45.254 [2024-11-28 13:12:15.150041] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:41:45.254 [2024-11-28 13:12:15.150104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3709951 ] 00:41:45.254 [2024-11-28 13:12:15.284398] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:41:45.254 [2024-11-28 13:12:15.346087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:45.254 [2024-11-28 13:12:15.364781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:45.516 Running I/O for 10 seconds... 00:41:46.089 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:46.089 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:41:46.089 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:41:46.089 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.089 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:46.089 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.089 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:46.089 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:41:46.089 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:41:46.089 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:41:46.089 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:41:46.089 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- target/host_management.sh@53 -- # local i 00:41:46.089 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:41:46.089 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:41:46.089 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:41:46.089 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:41:46.089 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.089 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:46.089 13:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.090 13:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=769 00:41:46.090 13:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 769 -ge 100 ']' 00:41:46.090 13:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:41:46.090 13:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:41:46.090 13:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:41:46.090 13:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:41:46.090 13:12:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.090 13:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:46.090 [2024-11-28 13:12:16.021581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with 
the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.021835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9210 is same with the state(6) to be set 00:41:46.090 [2024-11-28 13:12:16.022123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:46.090 [2024-11-28 13:12:16.022175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:46.090 [2024-11-28 13:12:16.022195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:46.090 [2024-11-28 13:12:16.022203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:46.090 [2024-11-28 13:12:16.022213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:46.090 [2024-11-28 13:12:16.022221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:46.090 [2024-11-28 13:12:16.022230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:46.090 [2024-11-28 13:12:16.022238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:46.090 [2024-11-28 13:12:16.022248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:46.090 [2024-11-28 13:12:16.022255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:46.090 [2024-11-28 13:12:16.022264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:46.090 [2024-11-28 13:12:16.022272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:46.090 [2024-11-28 13:12:16.022281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:46.090 [2024-11-28 13:12:16.022289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:46.090 [2024-11-28 13:12:16.022298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:46.090 [2024-11-28 13:12:16.022305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:41:46.090 [2024-11-28 13:12:16.022314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:46.090 [2024-11-28 13:12:16.022322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... identical nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs repeated for the remaining outstanding I/O on qid:1 (WRITE cid:7-30, lba 115584-118528; READ cid:31-61, lba 110464-114304), each completed ABORTED - SQ DELETION (00/08), timestamps 13:12:16.022331-13:12:16.023265 ...] 
00:41:46.092 [2024-11-28 13:12:16.024513] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:41:46.092 13:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.092 13:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:41:46.092 13:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.092 task offset: 114432 on job bdev=Nvme0n1 fails 00:41:46.092 00:41:46.092 Latency(us) 00:41:46.092 [2024-11-28T12:12:16.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:46.092 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:41:46.092 Job: Nvme0n1 ended in about 0.48 seconds with error 00:41:46.092 Verification LBA range: start 0x0 length 0x400 00:41:46.092 Nvme0n1 : 0.48 1786.93 111.68 132.52 0.00 32384.08 1560.12 35691.17 00:41:46.092 [2024-11-28T12:12:16.219Z] =================================================================================================================== 00:41:46.092 [2024-11-28T12:12:16.219Z] Total : 1786.93 111.68 132.52 0.00 32384.08 1560.12 35691.17 00:41:46.092 13:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:46.092 [2024-11-28 13:12:16.026533] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:46.092 [2024-11-28 13:12:16.026557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e3bd0 (9): Bad file descriptor 00:41:46.092 [2024-11-28 13:12:16.027691] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:41:46.092 [2024-11-28 13:12:16.027779] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:41:46.092 [2024-11-28 13:12:16.027801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:46.092 [2024-11-28 13:12:16.027816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:41:46.092 [2024-11-28 13:12:16.027824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:41:46.092 [2024-11-28 13:12:16.027832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.092 [2024-11-28 13:12:16.027839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e3bd0 00:41:46.092 [2024-11-28 13:12:16.027857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e3bd0 (9): Bad file descriptor 00:41:46.092 [2024-11-28 13:12:16.027870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:41:46.092 [2024-11-28 13:12:16.027878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:41:46.092 [2024-11-28 13:12:16.027887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:41:46.092 [2024-11-28 13:12:16.027896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:41:46.092 13:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.092 13:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:41:47.035 13:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3709951 00:41:47.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3709951) - No such process 00:41:47.035 13:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:41:47.035 13:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:41:47.035 13:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:41:47.035 13:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:41:47.035 13:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:41:47.035 13:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:41:47.035 13:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:47.035 13:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:47.035 { 00:41:47.035 "params": { 00:41:47.035 "name": "Nvme$subsystem", 00:41:47.035 "trtype": "$TEST_TRANSPORT", 00:41:47.035 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:41:47.035 "adrfam": "ipv4", 00:41:47.035 "trsvcid": "$NVMF_PORT", 00:41:47.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:47.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:47.035 "hdgst": ${hdgst:-false}, 00:41:47.035 "ddgst": ${ddgst:-false} 00:41:47.035 }, 00:41:47.035 "method": "bdev_nvme_attach_controller" 00:41:47.035 } 00:41:47.035 EOF 00:41:47.035 )") 00:41:47.035 13:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:41:47.035 13:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:41:47.035 13:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:41:47.035 13:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:47.035 "params": { 00:41:47.035 "name": "Nvme0", 00:41:47.035 "trtype": "tcp", 00:41:47.035 "traddr": "10.0.0.2", 00:41:47.035 "adrfam": "ipv4", 00:41:47.035 "trsvcid": "4420", 00:41:47.035 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:47.035 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:47.035 "hdgst": false, 00:41:47.035 "ddgst": false 00:41:47.035 }, 00:41:47.035 "method": "bdev_nvme_attach_controller" 00:41:47.035 }' 00:41:47.035 [2024-11-28 13:12:17.103395] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:41:47.035 [2024-11-28 13:12:17.103470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3710301 ] 00:41:47.296 [2024-11-28 13:12:17.240585] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:41:47.296 [2024-11-28 13:12:17.299971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:47.296 [2024-11-28 13:12:17.326599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:47.558 Running I/O for 1 seconds... 00:41:48.497 2125.00 IOPS, 132.81 MiB/s 00:41:48.497 Latency(us) 00:41:48.497 [2024-11-28T12:12:18.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:48.497 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:41:48.497 Verification LBA range: start 0x0 length 0x400 00:41:48.497 Nvme0n1 : 1.02 2157.76 134.86 0.00 0.00 28957.84 3722.39 30217.07 00:41:48.497 [2024-11-28T12:12:18.624Z] =================================================================================================================== 00:41:48.497 [2024-11-28T12:12:18.624Z] Total : 2157.76 134.86 0.00 0.00 28957.84 3722.39 30217.07 00:41:48.497 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:41:48.497 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:41:48.497 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:41:48.497 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:41:48.758 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:41:48.758 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:48.758 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:41:48.758 
13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:48.758 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:41:48.758 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:48.758 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:48.758 rmmod nvme_tcp 00:41:48.758 rmmod nvme_fabrics 00:41:48.758 rmmod nvme_keyring 00:41:48.758 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:48.758 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:41:48.758 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:41:48.758 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3709592 ']' 00:41:48.758 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3709592 00:41:48.758 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3709592 ']' 00:41:48.758 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3709592 00:41:48.758 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:41:48.758 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:48.758 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3709592 00:41:48.758 13:12:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:48.758 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:48.758 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3709592' 00:41:48.758 killing process with pid 3709592 00:41:48.758 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3709592 00:41:48.758 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3709592 00:41:48.758 [2024-11-28 13:12:18.857500] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:41:49.020 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:49.020 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:49.020 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:49.020 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:41:49.020 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:41:49.020 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:49.020 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:41:49.020 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:49.020 13:12:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:49.020 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:49.020 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:49.020 13:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:50.934 13:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:50.934 13:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:41:50.934 00:41:50.934 real 0m14.465s 00:41:50.934 user 0m19.040s 00:41:50.934 sys 0m7.276s 00:41:50.934 13:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:50.934 13:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:50.934 ************************************ 00:41:50.934 END TEST nvmf_host_management 00:41:50.934 ************************************ 00:41:50.934 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:41:50.934 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:50.934 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:50.934 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:50.934 ************************************ 00:41:50.934 START TEST nvmf_lvol 
00:41:50.934 ************************************ 00:41:50.934 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:41:51.195 * Looking for test storage... 00:41:51.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:41:51.195 
13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:51.195 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:51.196 
13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:51.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.196 --rc genhtml_branch_coverage=1 00:41:51.196 --rc genhtml_function_coverage=1 00:41:51.196 --rc genhtml_legend=1 00:41:51.196 --rc geninfo_all_blocks=1 00:41:51.196 --rc geninfo_unexecuted_blocks=1 00:41:51.196 00:41:51.196 ' 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:51.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.196 --rc genhtml_branch_coverage=1 00:41:51.196 --rc genhtml_function_coverage=1 00:41:51.196 --rc genhtml_legend=1 00:41:51.196 --rc geninfo_all_blocks=1 00:41:51.196 --rc geninfo_unexecuted_blocks=1 00:41:51.196 00:41:51.196 ' 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:51.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.196 --rc genhtml_branch_coverage=1 00:41:51.196 --rc genhtml_function_coverage=1 00:41:51.196 --rc genhtml_legend=1 00:41:51.196 --rc geninfo_all_blocks=1 00:41:51.196 --rc geninfo_unexecuted_blocks=1 00:41:51.196 00:41:51.196 ' 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:51.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.196 --rc genhtml_branch_coverage=1 00:41:51.196 --rc 
genhtml_function_coverage=1 00:41:51.196 --rc genhtml_legend=1 00:41:51.196 --rc geninfo_all_blocks=1 00:41:51.196 --rc geninfo_unexecuted_blocks=1 00:41:51.196 00:41:51.196 ' 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.196 13:12:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:51.196 
13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:41:51.196 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:41:59.339 13:12:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:59.339 13:12:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:59.339 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:59.339 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:59.339 13:12:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:59.339 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:59.339 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:59.340 13:12:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:59.340 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:59.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:59.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:41:59.340 00:41:59.340 --- 10.0.0.2 ping statistics --- 00:41:59.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:59.340 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:59.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:59.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:41:59.340 00:41:59.340 --- 10.0.0.1 ping statistics --- 00:41:59.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:59.340 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3714645 
00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3714645 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3714645 ']' 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:59.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:59.340 13:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:59.340 [2024-11-28 13:12:28.631382] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:59.340 [2024-11-28 13:12:28.632349] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:41:59.340 [2024-11-28 13:12:28.632387] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:59.340 [2024-11-28 13:12:28.771640] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:41:59.340 [2024-11-28 13:12:28.832272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:59.340 [2024-11-28 13:12:28.850140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:59.340 [2024-11-28 13:12:28.850176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:59.340 [2024-11-28 13:12:28.850184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:59.340 [2024-11-28 13:12:28.850191] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:59.340 [2024-11-28 13:12:28.850197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:59.340 [2024-11-28 13:12:28.851561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:59.340 [2024-11-28 13:12:28.851711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:59.340 [2024-11-28 13:12:28.851713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:59.340 [2024-11-28 13:12:28.901563] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:59.340 [2024-11-28 13:12:28.902406] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:59.340 [2024-11-28 13:12:28.902974] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:59.340 [2024-11-28 13:12:28.903129] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:41:59.340 13:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:59.340 13:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:41:59.340 13:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:59.340 13:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:59.340 13:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:59.340 13:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:59.340 13:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:59.602 [2024-11-28 13:12:29.616567] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:59.602 13:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:59.863 13:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:41:59.863 13:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:00.125 13:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:42:00.125 13:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:42:00.387 13:12:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:42:00.387 13:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=449c1dce-4cb4-4250-95b0-aaeee25d80b9 00:42:00.387 13:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 449c1dce-4cb4-4250-95b0-aaeee25d80b9 lvol 20 00:42:00.693 13:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3083867b-42d5-4610-923c-51b865b9f418 00:42:00.693 13:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:42:00.693 13:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3083867b-42d5-4610-923c-51b865b9f418 00:42:01.006 13:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:01.006 [2024-11-28 13:12:31.104328] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:01.317 13:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:01.317 13:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3715338 00:42:01.317 13:12:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:42:01.317 13:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:42:02.288 13:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3083867b-42d5-4610-923c-51b865b9f418 MY_SNAPSHOT 00:42:02.548 13:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9faa8ad5-350b-4896-bfa5-453f2b4aa32a 00:42:02.548 13:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3083867b-42d5-4610-923c-51b865b9f418 30 00:42:02.809 13:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9faa8ad5-350b-4896-bfa5-453f2b4aa32a MY_CLONE 00:42:03.070 13:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=221418a8-b320-4356-8bde-99a24d6464d5 00:42:03.070 13:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 221418a8-b320-4356-8bde-99a24d6464d5 00:42:03.330 13:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3715338 00:42:13.326 Initializing NVMe Controllers 00:42:13.326 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:42:13.326 Controller IO queue size 128, less than required. 
00:42:13.326 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:42:13.326 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:42:13.326 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:42:13.326 Initialization complete. Launching workers. 00:42:13.326 ======================================================== 00:42:13.326 Latency(us) 00:42:13.326 Device Information : IOPS MiB/s Average min max 00:42:13.326 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15344.00 59.94 8344.10 1572.28 76198.78 00:42:13.326 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15405.20 60.18 8309.32 3966.94 55796.48 00:42:13.326 ======================================================== 00:42:13.326 Total : 30749.20 120.11 8326.67 1572.28 76198.78 00:42:13.326 00:42:13.326 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:13.326 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3083867b-42d5-4610-923c-51b865b9f418 00:42:13.326 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 449c1dce-4cb4-4250-95b0-aaeee25d80b9 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:42:13.326 13:12:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:13.326 rmmod nvme_tcp 00:42:13.326 rmmod nvme_fabrics 00:42:13.326 rmmod nvme_keyring 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3714645 ']' 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3714645 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3714645 ']' 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3714645 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3714645 
00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3714645' 00:42:13.326 killing process with pid 3714645 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3714645 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3714645 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:42:13.326 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:13.327 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:42:13.327 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:13.327 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:13.327 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:13.327 13:12:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:13.327 13:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:14.710 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:14.710 00:42:14.710 real 0m23.411s 00:42:14.711 user 0m55.274s 00:42:14.711 sys 0m10.507s 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:42:14.711 ************************************ 00:42:14.711 END TEST nvmf_lvol 00:42:14.711 ************************************ 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:14.711 ************************************ 00:42:14.711 START TEST nvmf_lvs_grow 00:42:14.711 ************************************ 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:42:14.711 * Looking for test storage... 
00:42:14.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:14.711 13:12:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:14.711 13:12:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:14.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:14.711 --rc genhtml_branch_coverage=1 00:42:14.711 --rc genhtml_function_coverage=1 00:42:14.711 --rc genhtml_legend=1 00:42:14.711 --rc geninfo_all_blocks=1 00:42:14.711 --rc geninfo_unexecuted_blocks=1 00:42:14.711 00:42:14.711 ' 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:14.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:14.711 --rc genhtml_branch_coverage=1 00:42:14.711 --rc genhtml_function_coverage=1 00:42:14.711 --rc genhtml_legend=1 00:42:14.711 --rc geninfo_all_blocks=1 00:42:14.711 --rc geninfo_unexecuted_blocks=1 00:42:14.711 00:42:14.711 ' 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:14.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:14.711 --rc genhtml_branch_coverage=1 00:42:14.711 --rc genhtml_function_coverage=1 00:42:14.711 --rc genhtml_legend=1 00:42:14.711 --rc geninfo_all_blocks=1 00:42:14.711 --rc geninfo_unexecuted_blocks=1 00:42:14.711 00:42:14.711 ' 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:14.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:14.711 --rc genhtml_branch_coverage=1 00:42:14.711 --rc genhtml_function_coverage=1 00:42:14.711 --rc genhtml_legend=1 00:42:14.711 --rc geninfo_all_blocks=1 00:42:14.711 --rc 
geninfo_unexecuted_blocks=1 00:42:14.711 00:42:14.711 ' 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:14.711 13:12:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:14.711 13:12:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:14.711 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:14.712 13:12:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:42:14.712 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:42:22.878 
13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:22.878 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:42:22.878 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:22.879 13:12:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:22.879 13:12:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:42:22.879 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:42:22.879 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:42:22.879 Found net devices under 0000:4b:00.0: cvl_0_0 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:22.879 13:12:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:42:22.879 Found net devices under 0000:4b:00.1: cvl_0_1 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:22.879 
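The "Found net devices under 0000:4b:00.0: cvl_0_0" lines above come from the sysfs lookup at common.sh@411/427: each PCI function lists its kernel netdev names under `/sys/bus/pci/devices/<bdf>/net/`, and the path prefix is then stripped. A reproducible sketch of that lookup, using a fake sysfs tree in place of real hardware:

```shell
#!/usr/bin/env bash
# Map PCI functions to kernel net interfaces the way nvmf/common.sh does:
# glob /sys/bus/pci/devices/<bdf>/net/* and keep only the interface names.
# A temp directory stands in for sysfs so this runs without an E810 NIC.
set -euo pipefail

sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:4b:00.0/net/cvl_0_0" "$sysfs/0000:4b:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:4b:00.0 0000:4b:00.1; do
  pci_net_devs=( "$sysfs/$pci/net/"* )          # one entry per netdev
  pci_net_devs=( "${pci_net_devs[@]##*/}" )     # strip the sysfs path prefix
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=( "${pci_net_devs[@]}" )
done
```

The accumulated `net_devs` array (here `cvl_0_0 cvl_0_1`) is what `nvmf_tcp_init` later splits into target and initiator interfaces.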
13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:22.879 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:22.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:22.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:42:22.880 00:42:22.880 --- 10.0.0.2 ping statistics --- 00:42:22.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:22.880 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:22.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:22.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:42:22.880 00:42:22.880 --- 10.0.0.1 ping statistics --- 00:42:22.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:22.880 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:22.880 13:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:22.880 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:42:22.880 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:22.880 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:22.880 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:42:22.880 13:12:52 
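The `nvmf_tcp_init` sequence above, condensed: one NIC port stays in the default namespace as the initiator (`cvl_0_1`, 10.0.0.1) and its sibling port moves into a private namespace as the target (`cvl_0_0`, 10.0.0.2), so TCP traffic between them actually traverses the wire. A sketch of that topology — interface names and addresses taken from the log; commands are recorded and only executed when `DRY_RUN=0`, since the real thing needs root:

```shell
#!/usr/bin/env bash
# Rebuild the two-namespace NVMe/TCP test topology from the log above.
# By default (DRY_RUN unset or 1) commands are printed, not run.
set -euo pipefail

TARGET_NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0    INITIATOR_IF=cvl_0_1
TARGET_IP=10.0.0.2   INITIATOR_IP=10.0.0.1

CMDS=()
run() {
  CMDS+=("$*")
  if [[ "${DRY_RUN:-1}" == 0 ]]; then "$@"; else echo "+ $*"; fi
}

run ip netns add "$TARGET_NS"
run ip link set "$TARGET_IF" netns "$TARGET_NS"
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$TARGET_NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
run ip netns exec "$TARGET_NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
run ping -c 1 "$TARGET_IP"                                # target reachable from initiator
run ip netns exec "$TARGET_NS" ping -c 1 "$INITIATOR_IP"  # and the reverse direction
```

The two ping checks mirror the log's round-trip sanity tests before `nvmf_tgt` is launched inside the namespace.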
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3721364 00:42:22.880 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3721364 00:42:22.880 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:42:22.880 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3721364 ']' 00:42:22.880 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:22.880 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:22.880 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:22.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:22.880 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:22.880 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:42:22.880 [2024-11-28 13:12:52.084416] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:22.880 [2024-11-28 13:12:52.085389] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:42:22.880 [2024-11-28 13:12:52.085427] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:22.880 [2024-11-28 13:12:52.224037] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:42:22.880 [2024-11-28 13:12:52.280786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:22.880 [2024-11-28 13:12:52.297754] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:22.880 [2024-11-28 13:12:52.297787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:22.880 [2024-11-28 13:12:52.297795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:22.880 [2024-11-28 13:12:52.297801] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:22.880 [2024-11-28 13:12:52.297807] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:22.880 [2024-11-28 13:12:52.298345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:22.880 [2024-11-28 13:12:52.347483] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:22.880 [2024-11-28 13:12:52.347735] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
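The `waitforlisten 3721364` call above blocks until `nvmf_tgt` is up. The helper essentially polls: is the pid still alive, and has the RPC UNIX socket appeared yet? A simplified stand-in — the real helper in autotest_common.sh also probes the socket with rpc.py, and the retry count and default socket path here are assumptions:

```shell
# Simplified waitforlisten: poll until the given pid is alive and its RPC
# UNIX socket exists, or give up after a number of retries (0.1 s apart).
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=${3:-100} i
  for (( i = 0; i < retries; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 1   # process exited early
    [ -S "$rpc_addr" ] && return 0           # socket is up, app is listening
    sleep 0.1
  done
  return 1                                   # timed out
}
```

This is why the log prints "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." and then proceeds only once the return is 0.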
00:42:22.880 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:22.880 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:42:22.880 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:22.880 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:22.880 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:42:22.880 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:22.880 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:42:23.141 [2024-11-28 13:12:53.075119] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:23.141 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:42:23.141 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:23.141 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:23.141 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:42:23.141 ************************************ 00:42:23.141 START TEST lvs_grow_clean 00:42:23.141 ************************************ 00:42:23.141 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:42:23.141 13:12:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:42:23.141 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:42:23.141 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:42:23.141 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:42:23.141 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:42:23.141 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:42:23.141 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:23.141 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:23.141 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:42:23.401 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:42:23.401 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:42:23.661 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f34000ec-7877-445f-91c6-d7ca8e9c2370 00:42:23.661 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f34000ec-7877-445f-91c6-d7ca8e9c2370 00:42:23.661 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:42:23.661 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:42:23.661 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:42:23.661 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f34000ec-7877-445f-91c6-d7ca8e9c2370 lvol 150 00:42:23.921 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c2614b3a-a301-458c-8792-dc20049e5f0d 00:42:23.921 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:23.921 13:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:42:24.181 [2024-11-28 13:12:54.054768] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:42:24.181 [2024-11-28 13:12:54.054910] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:42:24.181 true 00:42:24.181 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:42:24.181 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f34000ec-7877-445f-91c6-d7ca8e9c2370 00:42:24.181 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:42:24.181 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:42:24.441 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c2614b3a-a301-458c-8792-dc20049e5f0d 00:42:24.441 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:24.701 [2024-11-28 13:12:54.695356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:24.701 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:24.961 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3721979 00:42:24.961 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:24.961 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:42:24.961 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3721979 /var/tmp/bdevperf.sock 00:42:24.961 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3721979 ']' 00:42:24.961 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:24.961 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:24.961 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:24.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:42:24.961 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:24.961 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:42:24.961 [2024-11-28 13:12:54.930034] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:42:24.961 [2024-11-28 13:12:54.930091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3721979 ] 00:42:24.961 [2024-11-28 13:12:55.062634] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:42:25.220 [2024-11-28 13:12:55.119944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:25.220 [2024-11-28 13:12:55.138473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:25.792 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:25.792 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:42:25.792 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:42:26.053 Nvme0n1 00:42:26.053 13:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:42:26.314 [ 
00:42:26.314 { 00:42:26.314 "name": "Nvme0n1", 00:42:26.314 "aliases": [ 00:42:26.314 "c2614b3a-a301-458c-8792-dc20049e5f0d" 00:42:26.314 ], 00:42:26.314 "product_name": "NVMe disk", 00:42:26.314 "block_size": 4096, 00:42:26.314 "num_blocks": 38912, 00:42:26.314 "uuid": "c2614b3a-a301-458c-8792-dc20049e5f0d", 00:42:26.314 "numa_id": 0, 00:42:26.314 "assigned_rate_limits": { 00:42:26.314 "rw_ios_per_sec": 0, 00:42:26.314 "rw_mbytes_per_sec": 0, 00:42:26.314 "r_mbytes_per_sec": 0, 00:42:26.314 "w_mbytes_per_sec": 0 00:42:26.314 }, 00:42:26.314 "claimed": false, 00:42:26.314 "zoned": false, 00:42:26.314 "supported_io_types": { 00:42:26.314 "read": true, 00:42:26.314 "write": true, 00:42:26.314 "unmap": true, 00:42:26.314 "flush": true, 00:42:26.314 "reset": true, 00:42:26.314 "nvme_admin": true, 00:42:26.314 "nvme_io": true, 00:42:26.314 "nvme_io_md": false, 00:42:26.314 "write_zeroes": true, 00:42:26.314 "zcopy": false, 00:42:26.314 "get_zone_info": false, 00:42:26.314 "zone_management": false, 00:42:26.314 "zone_append": false, 00:42:26.314 "compare": true, 00:42:26.314 "compare_and_write": true, 00:42:26.314 "abort": true, 00:42:26.314 "seek_hole": false, 00:42:26.314 "seek_data": false, 00:42:26.314 "copy": true, 00:42:26.314 "nvme_iov_md": false 00:42:26.314 }, 00:42:26.314 "memory_domains": [ 00:42:26.314 { 00:42:26.314 "dma_device_id": "system", 00:42:26.314 "dma_device_type": 1 00:42:26.314 } 00:42:26.314 ], 00:42:26.314 "driver_specific": { 00:42:26.314 "nvme": [ 00:42:26.314 { 00:42:26.314 "trid": { 00:42:26.314 "trtype": "TCP", 00:42:26.314 "adrfam": "IPv4", 00:42:26.314 "traddr": "10.0.0.2", 00:42:26.314 "trsvcid": "4420", 00:42:26.314 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:42:26.314 }, 00:42:26.314 "ctrlr_data": { 00:42:26.314 "cntlid": 1, 00:42:26.314 "vendor_id": "0x8086", 00:42:26.314 "model_number": "SPDK bdev Controller", 00:42:26.314 "serial_number": "SPDK0", 00:42:26.314 "firmware_revision": "25.01", 00:42:26.314 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:42:26.314 "oacs": { 00:42:26.314 "security": 0, 00:42:26.314 "format": 0, 00:42:26.314 "firmware": 0, 00:42:26.314 "ns_manage": 0 00:42:26.314 }, 00:42:26.314 "multi_ctrlr": true, 00:42:26.314 "ana_reporting": false 00:42:26.314 }, 00:42:26.314 "vs": { 00:42:26.314 "nvme_version": "1.3" 00:42:26.314 }, 00:42:26.314 "ns_data": { 00:42:26.314 "id": 1, 00:42:26.314 "can_share": true 00:42:26.314 } 00:42:26.314 } 00:42:26.314 ], 00:42:26.314 "mp_policy": "active_passive" 00:42:26.314 } 00:42:26.314 } 00:42:26.314 ] 00:42:26.314 13:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3722133 00:42:26.315 13:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:42:26.315 13:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:42:26.315 Running I/O for 10 seconds... 
00:42:27.696 Latency(us) 00:42:27.696 [2024-11-28T12:12:57.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:27.696 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:27.696 Nvme0n1 : 1.00 17272.00 67.47 0.00 0.00 0.00 0.00 0.00 00:42:27.696 [2024-11-28T12:12:57.823Z] =================================================================================================================== 00:42:27.696 [2024-11-28T12:12:57.823Z] Total : 17272.00 67.47 0.00 0.00 0.00 0.00 0.00 00:42:27.696 00:42:28.269 13:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f34000ec-7877-445f-91c6-d7ca8e9c2370 00:42:28.269 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:28.269 Nvme0n1 : 2.00 17431.00 68.09 0.00 0.00 0.00 0.00 0.00 00:42:28.269 [2024-11-28T12:12:58.396Z] =================================================================================================================== 00:42:28.269 [2024-11-28T12:12:58.396Z] Total : 17431.00 68.09 0.00 0.00 0.00 0.00 0.00 00:42:28.269 00:42:28.530 true 00:42:28.530 13:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f34000ec-7877-445f-91c6-d7ca8e9c2370 00:42:28.530 13:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:42:28.791 13:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:42:28.791 13:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:42:28.791 13:12:58 
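The `data_clusters == 49` and `data_clusters == 99` assertions above fall out of simple arithmetic: the test truncates a 200 MiB AIO file and builds the lvstore with `--cluster-sz 4194304` (4 MiB), giving 50 clusters, one of which is assumed to be consumed by lvstore metadata; after truncating the file to 400 MiB, `bdev_aio_rescan`, and `bdev_lvol_grow_lvstore`, the same accounting yields 99:

```shell
#!/usr/bin/env bash
# Cluster arithmetic behind the lvs_grow checks: size / cluster size,
# minus one cluster assumed reserved for lvstore metadata.
cluster_mb=4  md_clusters=1

aio_mb=200
data_clusters=$(( aio_mb / cluster_mb - md_clusters ))
echo "initial data clusters: $data_clusters"   # 49, matching common.sh's first check

aio_mb=400
grown_clusters=$(( aio_mb / cluster_mb - md_clusters ))
echo "after grow: $grown_clusters"             # 99, matching the post-grow check
```

The exact metadata overhead depends on `--md-pages-per-cluster-ratio` (300 in the log); one cluster happens to cover it at both sizes here, which is what keeps the 49/99 expectations stable.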
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3722133 00:42:29.363 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:29.363 Nvme0n1 : 3.00 17568.33 68.63 0.00 0.00 0.00 0.00 0.00 00:42:29.363 [2024-11-28T12:12:59.490Z] =================================================================================================================== 00:42:29.363 [2024-11-28T12:12:59.490Z] Total : 17568.33 68.63 0.00 0.00 0.00 0.00 0.00 00:42:29.363 00:42:30.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:30.304 Nvme0n1 : 4.00 18081.75 70.63 0.00 0.00 0.00 0.00 0.00 00:42:30.304 [2024-11-28T12:13:00.431Z] =================================================================================================================== 00:42:30.304 [2024-11-28T12:13:00.431Z] Total : 18081.75 70.63 0.00 0.00 0.00 0.00 0.00 00:42:30.304 00:42:31.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:31.687 Nvme0n1 : 5.00 19517.00 76.24 0.00 0.00 0.00 0.00 0.00 00:42:31.687 [2024-11-28T12:13:01.814Z] =================================================================================================================== 00:42:31.687 [2024-11-28T12:13:01.814Z] Total : 19517.00 76.24 0.00 0.00 0.00 0.00 0.00 00:42:31.687 00:42:32.628 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:32.628 Nvme0n1 : 6.00 20476.33 79.99 0.00 0.00 0.00 0.00 0.00 00:42:32.628 [2024-11-28T12:13:02.755Z] =================================================================================================================== 00:42:32.628 [2024-11-28T12:13:02.755Z] Total : 20476.33 79.99 0.00 0.00 0.00 0.00 0.00 00:42:32.628 00:42:33.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:33.569 Nvme0n1 : 7.00 21170.71 82.70 0.00 0.00 0.00 0.00 0.00 00:42:33.569 [2024-11-28T12:13:03.696Z] 
=================================================================================================================== 00:42:33.569 [2024-11-28T12:13:03.696Z] Total : 21170.71 82.70 0.00 0.00 0.00 0.00 0.00 00:42:33.569 00:42:34.510 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:34.510 Nvme0n1 : 8.00 21697.50 84.76 0.00 0.00 0.00 0.00 0.00 00:42:34.510 [2024-11-28T12:13:04.637Z] =================================================================================================================== 00:42:34.510 [2024-11-28T12:13:04.637Z] Total : 21697.50 84.76 0.00 0.00 0.00 0.00 0.00 00:42:34.510 00:42:35.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:35.450 Nvme0n1 : 9.00 22094.78 86.31 0.00 0.00 0.00 0.00 0.00 00:42:35.450 [2024-11-28T12:13:05.577Z] =================================================================================================================== 00:42:35.450 [2024-11-28T12:13:05.577Z] Total : 22094.78 86.31 0.00 0.00 0.00 0.00 0.00 00:42:35.450 00:42:36.392 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:36.392 Nvme0n1 : 10.00 22425.30 87.60 0.00 0.00 0.00 0.00 0.00 00:42:36.392 [2024-11-28T12:13:06.519Z] =================================================================================================================== 00:42:36.392 [2024-11-28T12:13:06.519Z] Total : 22425.30 87.60 0.00 0.00 0.00 0.00 0.00 00:42:36.392 00:42:36.392 00:42:36.392 Latency(us) 00:42:36.392 [2024-11-28T12:13:06.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:36.392 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:36.392 Nvme0n1 : 10.00 22423.62 87.59 0.00 0.00 5704.66 2969.70 33063.60 00:42:36.392 [2024-11-28T12:13:06.519Z] =================================================================================================================== 00:42:36.392 [2024-11-28T12:13:06.519Z] Total : 22423.62 87.59 
0.00 0.00 5704.66 2969.70 33063.60 00:42:36.392 { 00:42:36.392 "results": [ 00:42:36.392 { 00:42:36.392 "job": "Nvme0n1", 00:42:36.392 "core_mask": "0x2", 00:42:36.392 "workload": "randwrite", 00:42:36.392 "status": "finished", 00:42:36.392 "queue_depth": 128, 00:42:36.392 "io_size": 4096, 00:42:36.392 "runtime": 10.003646, 00:42:36.392 "iops": 22423.624346563243, 00:42:36.392 "mibps": 87.59228260376267, 00:42:36.392 "io_failed": 0, 00:42:36.392 "io_timeout": 0, 00:42:36.392 "avg_latency_us": 5704.660920566125, 00:42:36.392 "min_latency_us": 2969.702639492148, 00:42:36.392 "max_latency_us": 33063.60173738724 00:42:36.392 } 00:42:36.392 ], 00:42:36.392 "core_count": 1 00:42:36.392 } 00:42:36.392 13:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3721979 00:42:36.392 13:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3721979 ']' 00:42:36.392 13:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3721979 00:42:36.392 13:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:42:36.392 13:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:36.392 13:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3721979 00:42:36.392 13:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:36.392 13:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:36.392 13:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3721979' 00:42:36.392 killing process with pid 3721979 00:42:36.392 13:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3721979 00:42:36.392 Received shutdown signal, test time was about 10.000000 seconds 00:42:36.392 00:42:36.392 Latency(us) 00:42:36.392 [2024-11-28T12:13:06.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:36.392 [2024-11-28T12:13:06.519Z] =================================================================================================================== 00:42:36.392 [2024-11-28T12:13:06.519Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:36.392 13:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3721979 00:42:36.653 13:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:36.653 13:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:36.913 13:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f34000ec-7877-445f-91c6-d7ca8e9c2370 00:42:36.913 13:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:42:37.173 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:42:37.173 13:13:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:42:37.173 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:42:37.173 [2024-11-28 13:13:07.242851] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:42:37.173 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f34000ec-7877-445f-91c6-d7ca8e9c2370 00:42:37.173 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:42:37.173 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f34000ec-7877-445f-91c6-d7ca8e9c2370 00:42:37.173 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:37.173 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:37.173 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:37.173 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:37.173 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:37.173 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:37.173 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:37.173 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:42:37.173 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f34000ec-7877-445f-91c6-d7ca8e9c2370 00:42:37.433 request: 00:42:37.433 { 00:42:37.433 "uuid": "f34000ec-7877-445f-91c6-d7ca8e9c2370", 00:42:37.433 "method": "bdev_lvol_get_lvstores", 00:42:37.433 "req_id": 1 00:42:37.433 } 00:42:37.433 Got JSON-RPC error response 00:42:37.433 response: 00:42:37.433 { 00:42:37.433 "code": -19, 00:42:37.433 "message": "No such device" 00:42:37.433 } 00:42:37.433 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:42:37.433 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:37.433 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:37.433 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:37.433 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:42:37.693 aio_bdev 00:42:37.693 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c2614b3a-a301-458c-8792-dc20049e5f0d 00:42:37.694 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=c2614b3a-a301-458c-8792-dc20049e5f0d 00:42:37.694 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:42:37.694 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:42:37.694 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:42:37.694 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:42:37.694 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:42:37.694 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c2614b3a-a301-458c-8792-dc20049e5f0d -t 2000 00:42:37.954 [ 00:42:37.954 { 00:42:37.954 "name": "c2614b3a-a301-458c-8792-dc20049e5f0d", 00:42:37.954 "aliases": [ 00:42:37.954 "lvs/lvol" 00:42:37.954 ], 00:42:37.954 "product_name": "Logical Volume", 00:42:37.954 "block_size": 4096, 00:42:37.954 "num_blocks": 38912, 00:42:37.954 "uuid": "c2614b3a-a301-458c-8792-dc20049e5f0d", 00:42:37.954 "assigned_rate_limits": { 00:42:37.954 
"rw_ios_per_sec": 0, 00:42:37.954 "rw_mbytes_per_sec": 0, 00:42:37.954 "r_mbytes_per_sec": 0, 00:42:37.954 "w_mbytes_per_sec": 0 00:42:37.954 }, 00:42:37.954 "claimed": false, 00:42:37.954 "zoned": false, 00:42:37.954 "supported_io_types": { 00:42:37.954 "read": true, 00:42:37.954 "write": true, 00:42:37.954 "unmap": true, 00:42:37.954 "flush": false, 00:42:37.954 "reset": true, 00:42:37.954 "nvme_admin": false, 00:42:37.954 "nvme_io": false, 00:42:37.954 "nvme_io_md": false, 00:42:37.954 "write_zeroes": true, 00:42:37.954 "zcopy": false, 00:42:37.954 "get_zone_info": false, 00:42:37.954 "zone_management": false, 00:42:37.954 "zone_append": false, 00:42:37.954 "compare": false, 00:42:37.954 "compare_and_write": false, 00:42:37.954 "abort": false, 00:42:37.954 "seek_hole": true, 00:42:37.954 "seek_data": true, 00:42:37.954 "copy": false, 00:42:37.954 "nvme_iov_md": false 00:42:37.954 }, 00:42:37.954 "driver_specific": { 00:42:37.954 "lvol": { 00:42:37.954 "lvol_store_uuid": "f34000ec-7877-445f-91c6-d7ca8e9c2370", 00:42:37.954 "base_bdev": "aio_bdev", 00:42:37.954 "thin_provision": false, 00:42:37.954 "num_allocated_clusters": 38, 00:42:37.954 "snapshot": false, 00:42:37.954 "clone": false, 00:42:37.954 "esnap_clone": false 00:42:37.954 } 00:42:37.954 } 00:42:37.954 } 00:42:37.954 ] 00:42:37.954 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:42:37.954 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f34000ec-7877-445f-91c6-d7ca8e9c2370 00:42:37.954 13:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:42:38.215 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 
)) 00:42:38.215 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f34000ec-7877-445f-91c6-d7ca8e9c2370 00:42:38.215 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:42:38.215 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:42:38.215 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c2614b3a-a301-458c-8792-dc20049e5f0d 00:42:38.475 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f34000ec-7877-445f-91c6-d7ca8e9c2370 00:42:38.736 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:42:38.736 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:38.997 00:42:38.997 real 0m15.722s 00:42:38.997 user 0m15.337s 00:42:38.997 sys 0m1.405s 00:42:38.997 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:38.997 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:42:38.997 ************************************ 00:42:38.997 END TEST lvs_grow_clean 00:42:38.997 
************************************ 00:42:38.997 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:42:38.997 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:38.997 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:38.997 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:42:38.997 ************************************ 00:42:38.997 START TEST lvs_grow_dirty 00:42:38.997 ************************************ 00:42:38.997 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:42:38.997 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:42:38.997 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:42:38.997 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:42:38.997 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:42:38.997 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:42:38.997 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:42:38.997 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:38.997 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:38.997 13:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:42:39.258 13:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:42:39.258 13:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:42:39.258 13:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=fa14dd13-053e-4e2c-8d5b-db037a6399e5 00:42:39.258 13:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa14dd13-053e-4e2c-8d5b-db037a6399e5 00:42:39.258 13:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:42:39.518 13:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:42:39.518 13:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:42:39.518 13:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fa14dd13-053e-4e2c-8d5b-db037a6399e5 lvol 150 00:42:39.779 13:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=13c695ed-45cb-4ef6-8d5c-870ed2ed2484 00:42:39.779 13:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:39.779 13:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:42:39.779 [2024-11-28 13:13:09.850778] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:42:39.779 [2024-11-28 13:13:09.850920] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:42:39.779 true 00:42:39.779 13:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa14dd13-053e-4e2c-8d5b-db037a6399e5 00:42:39.779 13:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:42:40.039 13:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:42:40.039 13:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:42:40.298 
13:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 13c695ed-45cb-4ef6-8d5c-870ed2ed2484 00:42:40.298 13:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:40.559 [2024-11-28 13:13:10.527313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:40.559 13:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:40.819 13:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:42:40.819 13:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3724844 00:42:40.819 13:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:40.819 13:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3724844 /var/tmp/bdevperf.sock 00:42:40.819 13:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3724844 ']' 00:42:40.819 13:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:40.819 13:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:40.819 13:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:40.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:40.819 13:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:40.819 13:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:40.819 [2024-11-28 13:13:10.743085] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:42:40.819 [2024-11-28 13:13:10.743136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724844 ] 00:42:40.820 [2024-11-28 13:13:10.875814] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:42:40.820 [2024-11-28 13:13:10.930603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:41.080 [2024-11-28 13:13:10.947303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:41.650 13:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:41.650 13:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:42:41.650 13:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:42:41.910 Nvme0n1 00:42:41.910 13:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:42:42.171 [ 00:42:42.171 { 00:42:42.171 "name": "Nvme0n1", 00:42:42.171 "aliases": [ 00:42:42.171 "13c695ed-45cb-4ef6-8d5c-870ed2ed2484" 00:42:42.171 ], 00:42:42.171 "product_name": "NVMe disk", 00:42:42.171 "block_size": 4096, 00:42:42.171 "num_blocks": 38912, 00:42:42.171 "uuid": "13c695ed-45cb-4ef6-8d5c-870ed2ed2484", 00:42:42.171 "numa_id": 0, 00:42:42.171 "assigned_rate_limits": { 00:42:42.171 "rw_ios_per_sec": 0, 00:42:42.171 "rw_mbytes_per_sec": 0, 00:42:42.171 "r_mbytes_per_sec": 0, 00:42:42.171 "w_mbytes_per_sec": 0 00:42:42.171 }, 00:42:42.171 "claimed": false, 00:42:42.171 "zoned": false, 00:42:42.171 "supported_io_types": { 00:42:42.171 "read": true, 00:42:42.171 "write": true, 00:42:42.171 "unmap": true, 00:42:42.171 "flush": true, 00:42:42.171 "reset": true, 00:42:42.171 "nvme_admin": true, 00:42:42.171 "nvme_io": true, 00:42:42.171 "nvme_io_md": false, 00:42:42.171 "write_zeroes": 
true, 00:42:42.171 "zcopy": false, 00:42:42.171 "get_zone_info": false, 00:42:42.171 "zone_management": false, 00:42:42.171 "zone_append": false, 00:42:42.171 "compare": true, 00:42:42.171 "compare_and_write": true, 00:42:42.171 "abort": true, 00:42:42.171 "seek_hole": false, 00:42:42.171 "seek_data": false, 00:42:42.171 "copy": true, 00:42:42.171 "nvme_iov_md": false 00:42:42.171 }, 00:42:42.171 "memory_domains": [ 00:42:42.171 { 00:42:42.171 "dma_device_id": "system", 00:42:42.171 "dma_device_type": 1 00:42:42.171 } 00:42:42.171 ], 00:42:42.171 "driver_specific": { 00:42:42.171 "nvme": [ 00:42:42.171 { 00:42:42.171 "trid": { 00:42:42.171 "trtype": "TCP", 00:42:42.171 "adrfam": "IPv4", 00:42:42.171 "traddr": "10.0.0.2", 00:42:42.171 "trsvcid": "4420", 00:42:42.171 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:42:42.171 }, 00:42:42.171 "ctrlr_data": { 00:42:42.171 "cntlid": 1, 00:42:42.171 "vendor_id": "0x8086", 00:42:42.171 "model_number": "SPDK bdev Controller", 00:42:42.171 "serial_number": "SPDK0", 00:42:42.171 "firmware_revision": "25.01", 00:42:42.171 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:42.171 "oacs": { 00:42:42.171 "security": 0, 00:42:42.171 "format": 0, 00:42:42.171 "firmware": 0, 00:42:42.171 "ns_manage": 0 00:42:42.171 }, 00:42:42.171 "multi_ctrlr": true, 00:42:42.171 "ana_reporting": false 00:42:42.171 }, 00:42:42.171 "vs": { 00:42:42.171 "nvme_version": "1.3" 00:42:42.171 }, 00:42:42.171 "ns_data": { 00:42:42.171 "id": 1, 00:42:42.171 "can_share": true 00:42:42.171 } 00:42:42.171 } 00:42:42.171 ], 00:42:42.171 "mp_policy": "active_passive" 00:42:42.171 } 00:42:42.171 } 00:42:42.171 ] 00:42:42.171 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3725164 00:42:42.171 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:42:42.171 13:13:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:42:42.171 Running I/O for 10 seconds... 00:42:43.114 Latency(us) 00:42:43.114 [2024-11-28T12:13:13.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:43.114 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:43.114 Nvme0n1 : 1.00 17399.00 67.96 0.00 0.00 0.00 0.00 0.00 00:42:43.114 [2024-11-28T12:13:13.241Z] =================================================================================================================== 00:42:43.114 [2024-11-28T12:13:13.241Z] Total : 17399.00 67.96 0.00 0.00 0.00 0.00 0.00 00:42:43.114 00:42:44.059 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fa14dd13-053e-4e2c-8d5b-db037a6399e5 00:42:44.320 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:44.320 Nvme0n1 : 2.00 17653.00 68.96 0.00 0.00 0.00 0.00 0.00 00:42:44.320 [2024-11-28T12:13:14.447Z] =================================================================================================================== 00:42:44.320 [2024-11-28T12:13:14.447Z] Total : 17653.00 68.96 0.00 0.00 0.00 0.00 0.00 00:42:44.320 00:42:44.320 true 00:42:44.320 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa14dd13-053e-4e2c-8d5b-db037a6399e5 00:42:44.320 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:42:44.581 13:13:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:42:44.581 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:42:44.581 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3725164 00:42:45.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:45.163 Nvme0n1 : 3.00 17728.00 69.25 0.00 0.00 0.00 0.00 0.00 00:42:45.163 [2024-11-28T12:13:15.290Z] =================================================================================================================== 00:42:45.163 [2024-11-28T12:13:15.290Z] Total : 17728.00 69.25 0.00 0.00 0.00 0.00 0.00 00:42:45.163 00:42:46.107 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:46.107 Nvme0n1 : 4.00 17780.00 69.45 0.00 0.00 0.00 0.00 0.00 00:42:46.107 [2024-11-28T12:13:16.234Z] =================================================================================================================== 00:42:46.107 [2024-11-28T12:13:16.234Z] Total : 17780.00 69.45 0.00 0.00 0.00 0.00 0.00 00:42:46.107 00:42:47.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:47.492 Nvme0n1 : 5.00 19024.60 74.31 0.00 0.00 0.00 0.00 0.00 00:42:47.492 [2024-11-28T12:13:17.619Z] =================================================================================================================== 00:42:47.492 [2024-11-28T12:13:17.619Z] Total : 19024.60 74.31 0.00 0.00 0.00 0.00 0.00 00:42:47.492 00:42:48.432 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:48.432 Nvme0n1 : 6.00 20066.00 78.38 0.00 0.00 0.00 0.00 0.00 00:42:48.432 [2024-11-28T12:13:18.559Z] =================================================================================================================== 00:42:48.432 
[2024-11-28T12:13:18.559Z] Total : 20066.00 78.38 0.00 0.00 0.00 0.00 0.00 00:42:48.432 00:42:49.379 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:49.379 Nvme0n1 : 7.00 20814.71 81.31 0.00 0.00 0.00 0.00 0.00 00:42:49.379 [2024-11-28T12:13:19.506Z] =================================================================================================================== 00:42:49.379 [2024-11-28T12:13:19.506Z] Total : 20814.71 81.31 0.00 0.00 0.00 0.00 0.00 00:42:49.379 00:42:50.320 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:50.320 Nvme0n1 : 8.00 21376.25 83.50 0.00 0.00 0.00 0.00 0.00 00:42:50.320 [2024-11-28T12:13:20.447Z] =================================================================================================================== 00:42:50.320 [2024-11-28T12:13:20.447Z] Total : 21376.25 83.50 0.00 0.00 0.00 0.00 0.00 00:42:50.320 00:42:51.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:51.310 Nvme0n1 : 9.00 21823.33 85.25 0.00 0.00 0.00 0.00 0.00 00:42:51.310 [2024-11-28T12:13:21.437Z] =================================================================================================================== 00:42:51.310 [2024-11-28T12:13:21.437Z] Total : 21823.33 85.25 0.00 0.00 0.00 0.00 0.00 00:42:51.310 00:42:52.253 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:52.253 Nvme0n1 : 10.00 22168.30 86.59 0.00 0.00 0.00 0.00 0.00 00:42:52.253 [2024-11-28T12:13:22.380Z] =================================================================================================================== 00:42:52.253 [2024-11-28T12:13:22.380Z] Total : 22168.30 86.59 0.00 0.00 0.00 0.00 0.00 00:42:52.253 00:42:52.253 00:42:52.253 Latency(us) 00:42:52.253 [2024-11-28T12:13:22.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:52.253 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:42:52.253 Nvme0n1 : 10.00 22173.41 86.61 0.00 0.00 5770.15 3010.76 31092.92 00:42:52.253 [2024-11-28T12:13:22.380Z] =================================================================================================================== 00:42:52.253 [2024-11-28T12:13:22.380Z] Total : 22173.41 86.61 0.00 0.00 5770.15 3010.76 31092.92 00:42:52.253 { 00:42:52.253 "results": [ 00:42:52.253 { 00:42:52.253 "job": "Nvme0n1", 00:42:52.253 "core_mask": "0x2", 00:42:52.253 "workload": "randwrite", 00:42:52.253 "status": "finished", 00:42:52.253 "queue_depth": 128, 00:42:52.253 "io_size": 4096, 00:42:52.253 "runtime": 10.00347, 00:42:52.253 "iops": 22173.405828177623, 00:42:52.253 "mibps": 86.61486651631884, 00:42:52.253 "io_failed": 0, 00:42:52.253 "io_timeout": 0, 00:42:52.253 "avg_latency_us": 5770.146311626711, 00:42:52.253 "min_latency_us": 3010.7584363514866, 00:42:52.253 "max_latency_us": 31092.92348813899 00:42:52.253 } 00:42:52.253 ], 00:42:52.253 "core_count": 1 00:42:52.253 } 00:42:52.253 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3724844 00:42:52.253 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3724844 ']' 00:42:52.253 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3724844 00:42:52.253 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:42:52.253 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:52.253 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3724844 00:42:52.253 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:52.253 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:52.253 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3724844' 00:42:52.253 killing process with pid 3724844 00:42:52.253 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3724844 00:42:52.253 Received shutdown signal, test time was about 10.000000 seconds 00:42:52.253 00:42:52.253 Latency(us) 00:42:52.253 [2024-11-28T12:13:22.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:52.253 [2024-11-28T12:13:22.380Z] =================================================================================================================== 00:42:52.253 [2024-11-28T12:13:22.380Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:52.253 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3724844 00:42:52.516 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:52.516 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:52.777 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa14dd13-053e-4e2c-8d5b-db037a6399e5 00:42:52.777 13:13:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:42:52.777 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:42:52.777 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:42:52.777 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3721364 00:42:52.777 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3721364 00:42:53.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3721364 Killed "${NVMF_APP[@]}" "$@" 00:42:53.039 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:42:53.039 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:42:53.039 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:53.039 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:53.039 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:53.039 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3727181 00:42:53.039 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3727181 00:42:53.039 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:42:53.039 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3727181 ']' 00:42:53.039 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:53.039 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:53.039 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:53.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:53.039 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:53.039 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:53.039 [2024-11-28 13:13:22.995027] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:53.039 [2024-11-28 13:13:22.996087] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:42:53.039 [2024-11-28 13:13:22.996133] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:53.039 [2024-11-28 13:13:23.137509] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:42:53.300 [2024-11-28 13:13:23.190966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:53.300 [2024-11-28 13:13:23.212008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:53.300 [2024-11-28 13:13:23.212049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:53.300 [2024-11-28 13:13:23.212056] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:53.300 [2024-11-28 13:13:23.212066] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:53.300 [2024-11-28 13:13:23.212071] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:53.300 [2024-11-28 13:13:23.212717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:53.300 [2024-11-28 13:13:23.264205] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:53.300 [2024-11-28 13:13:23.264401] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:42:53.872 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:53.872 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:42:53.872 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:53.872 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:53.872 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:53.872 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:53.872 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:42:54.132 [2024-11-28 13:13:23.998864] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:42:54.132 [2024-11-28 13:13:23.999092] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:42:54.132 [2024-11-28 13:13:23.999197] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:42:54.132 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:42:54.132 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 13c695ed-45cb-4ef6-8d5c-870ed2ed2484 00:42:54.132 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=13c695ed-45cb-4ef6-8d5c-870ed2ed2484 00:42:54.132 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:42:54.132 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:42:54.132 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:42:54.132 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:42:54.132 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:42:54.133 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 13c695ed-45cb-4ef6-8d5c-870ed2ed2484 -t 2000 00:42:54.393 [ 00:42:54.393 { 00:42:54.393 "name": "13c695ed-45cb-4ef6-8d5c-870ed2ed2484", 00:42:54.393 "aliases": [ 00:42:54.393 "lvs/lvol" 00:42:54.393 ], 00:42:54.393 "product_name": "Logical Volume", 00:42:54.393 "block_size": 4096, 00:42:54.393 "num_blocks": 38912, 00:42:54.393 "uuid": "13c695ed-45cb-4ef6-8d5c-870ed2ed2484", 00:42:54.393 "assigned_rate_limits": { 00:42:54.393 "rw_ios_per_sec": 0, 00:42:54.393 "rw_mbytes_per_sec": 0, 00:42:54.393 "r_mbytes_per_sec": 0, 00:42:54.393 "w_mbytes_per_sec": 0 00:42:54.393 }, 00:42:54.393 "claimed": false, 00:42:54.393 "zoned": false, 00:42:54.393 "supported_io_types": { 00:42:54.393 "read": true, 00:42:54.393 "write": true, 00:42:54.393 "unmap": true, 00:42:54.393 "flush": false, 00:42:54.393 "reset": true, 00:42:54.393 "nvme_admin": false, 00:42:54.393 "nvme_io": false, 00:42:54.393 "nvme_io_md": false, 00:42:54.393 "write_zeroes": true, 
00:42:54.393 "zcopy": false, 00:42:54.393 "get_zone_info": false, 00:42:54.393 "zone_management": false, 00:42:54.393 "zone_append": false, 00:42:54.393 "compare": false, 00:42:54.393 "compare_and_write": false, 00:42:54.393 "abort": false, 00:42:54.393 "seek_hole": true, 00:42:54.393 "seek_data": true, 00:42:54.393 "copy": false, 00:42:54.393 "nvme_iov_md": false 00:42:54.393 }, 00:42:54.393 "driver_specific": { 00:42:54.393 "lvol": { 00:42:54.394 "lvol_store_uuid": "fa14dd13-053e-4e2c-8d5b-db037a6399e5", 00:42:54.394 "base_bdev": "aio_bdev", 00:42:54.394 "thin_provision": false, 00:42:54.394 "num_allocated_clusters": 38, 00:42:54.394 "snapshot": false, 00:42:54.394 "clone": false, 00:42:54.394 "esnap_clone": false 00:42:54.394 } 00:42:54.394 } 00:42:54.394 } 00:42:54.394 ] 00:42:54.394 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:42:54.394 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa14dd13-053e-4e2c-8d5b-db037a6399e5 00:42:54.394 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:42:54.763 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:42:54.763 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa14dd13-053e-4e2c-8d5b-db037a6399e5 00:42:54.763 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:42:54.763 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:42:54.763 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:42:54.763 [2024-11-28 13:13:24.845273] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:42:55.058 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa14dd13-053e-4e2c-8d5b-db037a6399e5 00:42:55.058 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:42:55.058 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa14dd13-053e-4e2c-8d5b-db037a6399e5 00:42:55.058 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:55.058 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:55.058 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:55.058 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:55.058 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:55.058 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:55.058 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:55.058 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:42:55.058 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa14dd13-053e-4e2c-8d5b-db037a6399e5 00:42:55.058 request: 00:42:55.058 { 00:42:55.058 "uuid": "fa14dd13-053e-4e2c-8d5b-db037a6399e5", 00:42:55.058 "method": "bdev_lvol_get_lvstores", 00:42:55.058 "req_id": 1 00:42:55.058 } 00:42:55.058 Got JSON-RPC error response 00:42:55.058 response: 00:42:55.058 { 00:42:55.058 "code": -19, 00:42:55.058 "message": "No such device" 00:42:55.058 } 00:42:55.058 13:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:42:55.058 13:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:55.058 13:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:55.058 13:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:55.058 13:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:42:55.319 aio_bdev 00:42:55.319 13:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 13c695ed-45cb-4ef6-8d5c-870ed2ed2484 00:42:55.319 13:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=13c695ed-45cb-4ef6-8d5c-870ed2ed2484 00:42:55.319 13:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:42:55.319 13:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:42:55.319 13:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:42:55.319 13:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:42:55.319 13:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:42:55.319 13:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 13c695ed-45cb-4ef6-8d5c-870ed2ed2484 -t 2000 00:42:55.580 [ 00:42:55.580 { 00:42:55.580 "name": "13c695ed-45cb-4ef6-8d5c-870ed2ed2484", 00:42:55.580 "aliases": [ 00:42:55.580 "lvs/lvol" 00:42:55.580 ], 00:42:55.580 "product_name": "Logical Volume", 00:42:55.580 "block_size": 4096, 00:42:55.580 "num_blocks": 38912, 00:42:55.580 "uuid": "13c695ed-45cb-4ef6-8d5c-870ed2ed2484", 00:42:55.580 "assigned_rate_limits": { 00:42:55.581 "rw_ios_per_sec": 0, 00:42:55.581 "rw_mbytes_per_sec": 0, 00:42:55.581 
"r_mbytes_per_sec": 0, 00:42:55.581 "w_mbytes_per_sec": 0 00:42:55.581 }, 00:42:55.581 "claimed": false, 00:42:55.581 "zoned": false, 00:42:55.581 "supported_io_types": { 00:42:55.581 "read": true, 00:42:55.581 "write": true, 00:42:55.581 "unmap": true, 00:42:55.581 "flush": false, 00:42:55.581 "reset": true, 00:42:55.581 "nvme_admin": false, 00:42:55.581 "nvme_io": false, 00:42:55.581 "nvme_io_md": false, 00:42:55.581 "write_zeroes": true, 00:42:55.581 "zcopy": false, 00:42:55.581 "get_zone_info": false, 00:42:55.581 "zone_management": false, 00:42:55.581 "zone_append": false, 00:42:55.581 "compare": false, 00:42:55.581 "compare_and_write": false, 00:42:55.581 "abort": false, 00:42:55.581 "seek_hole": true, 00:42:55.581 "seek_data": true, 00:42:55.581 "copy": false, 00:42:55.581 "nvme_iov_md": false 00:42:55.581 }, 00:42:55.581 "driver_specific": { 00:42:55.581 "lvol": { 00:42:55.581 "lvol_store_uuid": "fa14dd13-053e-4e2c-8d5b-db037a6399e5", 00:42:55.581 "base_bdev": "aio_bdev", 00:42:55.581 "thin_provision": false, 00:42:55.581 "num_allocated_clusters": 38, 00:42:55.581 "snapshot": false, 00:42:55.581 "clone": false, 00:42:55.581 "esnap_clone": false 00:42:55.581 } 00:42:55.581 } 00:42:55.581 } 00:42:55.581 ] 00:42:55.581 13:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:42:55.581 13:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa14dd13-053e-4e2c-8d5b-db037a6399e5 00:42:55.581 13:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:42:55.842 13:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:42:55.842 13:13:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa14dd13-053e-4e2c-8d5b-db037a6399e5 00:42:55.842 13:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:42:55.842 13:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:42:55.842 13:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 13c695ed-45cb-4ef6-8d5c-870ed2ed2484 00:42:56.102 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fa14dd13-053e-4e2c-8d5b-db037a6399e5 00:42:56.363 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:42:56.363 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:56.624 00:42:56.624 real 0m17.567s 00:42:56.624 user 0m35.495s 00:42:56.624 sys 0m2.903s 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:56.624 ************************************ 00:42:56.624 END TEST lvs_grow_dirty 00:42:56.624 ************************************ 
00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:42:56.624 nvmf_trace.0 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:56.624 13:13:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:56.624 rmmod nvme_tcp 00:42:56.624 rmmod nvme_fabrics 00:42:56.624 rmmod nvme_keyring 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3727181 ']' 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3727181 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3727181 ']' 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3727181 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:56.624 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3727181 00:42:56.886 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:56.886 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:56.886 
13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3727181' 00:42:56.886 killing process with pid 3727181 00:42:56.886 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3727181 00:42:56.886 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3727181 00:42:56.886 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:56.886 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:56.886 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:56.886 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:42:56.886 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:42:56.886 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:56.886 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:42:56.886 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:56.886 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:56.886 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:56.886 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:56.886 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:59.433 
13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:59.433 00:42:59.433 real 0m44.400s 00:42:59.433 user 0m53.670s 00:42:59.433 sys 0m10.295s 00:42:59.433 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:59.433 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:42:59.433 ************************************ 00:42:59.433 END TEST nvmf_lvs_grow 00:42:59.433 ************************************ 00:42:59.433 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:42:59.433 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:59.433 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:59.433 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:59.433 ************************************ 00:42:59.433 START TEST nvmf_bdev_io_wait 00:42:59.433 ************************************ 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:42:59.433 * Looking for test storage... 
00:42:59.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:59.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:59.433 --rc genhtml_branch_coverage=1 00:42:59.433 --rc genhtml_function_coverage=1 00:42:59.433 --rc genhtml_legend=1 00:42:59.433 --rc geninfo_all_blocks=1 00:42:59.433 --rc geninfo_unexecuted_blocks=1 00:42:59.433 00:42:59.433 ' 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:59.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:59.433 --rc genhtml_branch_coverage=1 00:42:59.433 --rc genhtml_function_coverage=1 00:42:59.433 --rc genhtml_legend=1 00:42:59.433 --rc geninfo_all_blocks=1 00:42:59.433 --rc geninfo_unexecuted_blocks=1 00:42:59.433 00:42:59.433 ' 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:59.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:59.433 --rc genhtml_branch_coverage=1 00:42:59.433 --rc genhtml_function_coverage=1 00:42:59.433 --rc genhtml_legend=1 00:42:59.433 --rc geninfo_all_blocks=1 00:42:59.433 --rc geninfo_unexecuted_blocks=1 00:42:59.433 00:42:59.433 ' 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:59.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:59.433 --rc genhtml_branch_coverage=1 00:42:59.433 --rc genhtml_function_coverage=1 
00:42:59.433 --rc genhtml_legend=1 00:42:59.433 --rc geninfo_all_blocks=1 00:42:59.433 --rc geninfo_unexecuted_blocks=1 00:42:59.433 00:42:59.433 ' 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:59.433 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:59.434 13:13:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:59.434 13:13:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:59.434 13:13:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:59.434 13:13:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:42:59.434 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:43:07.582 13:13:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:43:07.582 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:43:07.582 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:43:07.582 Found net devices under 0000:4b:00.0: cvl_0_0 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:07.582 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:43:07.583 Found net devices under 0000:4b:00.1: cvl_0_1 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:43:07.583 13:13:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:07.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:43:07.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:43:07.583 00:43:07.583 --- 10.0.0.2 ping statistics --- 00:43:07.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:07.583 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:07.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:07.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:43:07.583 00:43:07.583 --- 10.0.0.1 ping statistics --- 00:43:07.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:07.583 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:07.583 13:13:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3732089 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3732089 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3732089 ']' 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:07.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:07.583 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:07.583 [2024-11-28 13:13:36.638112] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:07.583 [2024-11-28 13:13:36.639260] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:43:07.583 [2024-11-28 13:13:36.639314] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:07.583 [2024-11-28 13:13:36.783029] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:43:07.583 [2024-11-28 13:13:36.840172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:07.583 [2024-11-28 13:13:36.859673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:07.583 [2024-11-28 13:13:36.859706] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:07.583 [2024-11-28 13:13:36.859714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:07.583 [2024-11-28 13:13:36.859720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:07.583 [2024-11-28 13:13:36.859726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:07.583 [2024-11-28 13:13:36.861191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:07.583 [2024-11-28 13:13:36.861280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:07.583 [2024-11-28 13:13:36.861401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:07.583 [2024-11-28 13:13:36.861402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:07.584 [2024-11-28 13:13:36.862134] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.584 13:13:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:07.584 [2024-11-28 13:13:37.528651] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:07.584 [2024-11-28 13:13:37.528747] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:43:07.584 [2024-11-28 13:13:37.529232] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:07.584 [2024-11-28 13:13:37.529349] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:07.584 [2024-11-28 13:13:37.538646] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:07.584 Malloc0 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.584 13:13:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:07.584 [2024-11-28 13:13:37.606878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3732269 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3732271 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:43:07.584 13:13:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:07.584 { 00:43:07.584 "params": { 00:43:07.584 "name": "Nvme$subsystem", 00:43:07.584 "trtype": "$TEST_TRANSPORT", 00:43:07.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:07.584 "adrfam": "ipv4", 00:43:07.584 "trsvcid": "$NVMF_PORT", 00:43:07.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:07.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:07.584 "hdgst": ${hdgst:-false}, 00:43:07.584 "ddgst": ${ddgst:-false} 00:43:07.584 }, 00:43:07.584 "method": "bdev_nvme_attach_controller" 00:43:07.584 } 00:43:07.584 EOF 00:43:07.584 )") 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3732273 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3732275 00:43:07.584 13:13:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:43:07.584 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:07.584 { 00:43:07.584 "params": { 00:43:07.584 "name": "Nvme$subsystem", 00:43:07.584 "trtype": "$TEST_TRANSPORT", 00:43:07.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:07.584 "adrfam": "ipv4", 00:43:07.584 "trsvcid": "$NVMF_PORT", 00:43:07.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:07.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:07.584 "hdgst": ${hdgst:-false}, 00:43:07.584 "ddgst": ${ddgst:-false} 00:43:07.584 }, 00:43:07.584 "method": "bdev_nvme_attach_controller" 00:43:07.584 } 00:43:07.584 EOF 00:43:07.584 )") 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:07.585 { 00:43:07.585 "params": { 00:43:07.585 "name": "Nvme$subsystem", 00:43:07.585 "trtype": "$TEST_TRANSPORT", 00:43:07.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:07.585 "adrfam": "ipv4", 00:43:07.585 "trsvcid": "$NVMF_PORT", 00:43:07.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:07.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:07.585 "hdgst": ${hdgst:-false}, 00:43:07.585 "ddgst": ${ddgst:-false} 00:43:07.585 }, 00:43:07.585 "method": "bdev_nvme_attach_controller" 00:43:07.585 } 00:43:07.585 EOF 00:43:07.585 )") 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:07.585 { 00:43:07.585 "params": { 00:43:07.585 "name": "Nvme$subsystem", 00:43:07.585 "trtype": "$TEST_TRANSPORT", 00:43:07.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:07.585 "adrfam": "ipv4", 00:43:07.585 "trsvcid": "$NVMF_PORT", 00:43:07.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:07.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:07.585 "hdgst": ${hdgst:-false}, 00:43:07.585 "ddgst": ${ddgst:-false} 00:43:07.585 }, 00:43:07.585 "method": 
"bdev_nvme_attach_controller" 00:43:07.585 } 00:43:07.585 EOF 00:43:07.585 )") 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3732269 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:07.585 "params": { 00:43:07.585 "name": "Nvme1", 00:43:07.585 "trtype": "tcp", 00:43:07.585 "traddr": "10.0.0.2", 00:43:07.585 "adrfam": "ipv4", 00:43:07.585 "trsvcid": "4420", 00:43:07.585 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:07.585 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:07.585 "hdgst": false, 00:43:07.585 "ddgst": false 00:43:07.585 }, 00:43:07.585 "method": "bdev_nvme_attach_controller" 00:43:07.585 }' 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:07.585 "params": { 00:43:07.585 "name": "Nvme1", 00:43:07.585 "trtype": "tcp", 00:43:07.585 "traddr": "10.0.0.2", 00:43:07.585 "adrfam": "ipv4", 00:43:07.585 "trsvcid": "4420", 00:43:07.585 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:07.585 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:07.585 "hdgst": false, 00:43:07.585 "ddgst": false 00:43:07.585 }, 00:43:07.585 "method": "bdev_nvme_attach_controller" 00:43:07.585 }' 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:07.585 "params": { 00:43:07.585 "name": "Nvme1", 00:43:07.585 "trtype": "tcp", 00:43:07.585 "traddr": "10.0.0.2", 00:43:07.585 "adrfam": "ipv4", 00:43:07.585 "trsvcid": "4420", 00:43:07.585 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:07.585 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:07.585 "hdgst": false, 00:43:07.585 "ddgst": false 00:43:07.585 }, 00:43:07.585 "method": "bdev_nvme_attach_controller" 00:43:07.585 }' 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:43:07.585 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:07.585 "params": { 00:43:07.585 "name": "Nvme1", 00:43:07.585 "trtype": "tcp", 00:43:07.585 "traddr": "10.0.0.2", 00:43:07.585 "adrfam": "ipv4", 00:43:07.585 "trsvcid": "4420", 00:43:07.585 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:07.585 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:07.585 "hdgst": false, 00:43:07.585 "ddgst": false 00:43:07.585 }, 00:43:07.585 "method": "bdev_nvme_attach_controller" 
00:43:07.585 }' 00:43:07.585 [2024-11-28 13:13:37.662631] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:43:07.585 [2024-11-28 13:13:37.662687] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:43:07.585 [2024-11-28 13:13:37.663869] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:43:07.585 [2024-11-28 13:13:37.663920] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:43:07.585 [2024-11-28 13:13:37.664153] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:43:07.585 [2024-11-28 13:13:37.664209] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:43:07.585 [2024-11-28 13:13:37.667476] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:43:07.585 [2024-11-28 13:13:37.667521] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:43:07.846 [2024-11-28 13:13:37.865561] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:43:07.846 [2024-11-28 13:13:37.909717] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:43:07.846 [2024-11-28 13:13:37.925925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:07.846 [2024-11-28 13:13:37.937935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:43:07.846 [2024-11-28 13:13:37.955218] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:43:07.846 [2024-11-28 13:13:37.956027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:07.846 [2024-11-28 13:13:37.967798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:43:08.106 [2024-11-28 13:13:38.002561] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:43:08.106 [2024-11-28 13:13:38.013767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:08.106 [2024-11-28 13:13:38.024592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:43:08.106 [2024-11-28 13:13:38.064257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:08.106 [2024-11-28 13:13:38.074964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:43:08.106 Running I/O for 1 seconds... 00:43:08.106 Running I/O for 1 seconds... 00:43:08.106 Running I/O for 1 seconds... 00:43:08.106 Running I/O for 1 seconds... 
00:43:09.048 7883.00 IOPS, 30.79 MiB/s 00:43:09.048 Latency(us) 00:43:09.048 [2024-11-28T12:13:39.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:09.048 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:43:09.048 Nvme1n1 : 1.02 7914.11 30.91 0.00 0.00 16058.43 2039.10 21130.05 00:43:09.048 [2024-11-28T12:13:39.175Z] =================================================================================================================== 00:43:09.048 [2024-11-28T12:13:39.175Z] Total : 7914.11 30.91 0.00 0.00 16058.43 2039.10 21130.05 00:43:09.048 7723.00 IOPS, 30.17 MiB/s [2024-11-28T12:13:39.175Z] 180872.00 IOPS, 706.53 MiB/s 00:43:09.048 Latency(us) 00:43:09.048 [2024-11-28T12:13:39.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:09.048 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:43:09.048 Nvme1n1 : 1.00 180518.78 705.15 0.00 0.00 705.49 295.94 1970.68 00:43:09.048 [2024-11-28T12:13:39.175Z] =================================================================================================================== 00:43:09.048 [2024-11-28T12:13:39.175Z] Total : 180518.78 705.15 0.00 0.00 705.49 295.94 1970.68 00:43:09.048 00:43:09.048 Latency(us) 00:43:09.048 [2024-11-28T12:13:39.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:09.048 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:43:09.048 Nvme1n1 : 1.01 7841.96 30.63 0.00 0.00 16278.90 4187.69 26385.19 00:43:09.048 [2024-11-28T12:13:39.175Z] =================================================================================================================== 00:43:09.048 [2024-11-28T12:13:39.175Z] Total : 7841.96 30.63 0.00 0.00 16278.90 4187.69 26385.19 00:43:09.308 12689.00 IOPS, 49.57 MiB/s 00:43:09.308 Latency(us) 00:43:09.308 [2024-11-28T12:13:39.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:09.308 
Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:43:09.308 Nvme1n1 : 1.01 12741.31 49.77 0.00 0.00 10014.70 4543.51 15327.50 00:43:09.308 [2024-11-28T12:13:39.435Z] =================================================================================================================== 00:43:09.308 [2024-11-28T12:13:39.435Z] Total : 12741.31 49.77 0.00 0.00 10014.70 4543.51 15327.50 00:43:09.308 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3732271 00:43:09.308 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3732273 00:43:09.308 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3732275 00:43:09.308 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:09.308 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.308 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:09.308 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.308 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:43:09.308 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:43:09.308 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:09.308 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:43:09.308 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:09.308 
13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:43:09.308 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:09.308 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:09.308 rmmod nvme_tcp 00:43:09.308 rmmod nvme_fabrics 00:43:09.308 rmmod nvme_keyring 00:43:09.308 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:09.308 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:43:09.308 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:43:09.308 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3732089 ']' 00:43:09.308 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3732089 00:43:09.308 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3732089 ']' 00:43:09.308 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3732089 00:43:09.308 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:43:09.309 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:09.309 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3732089 00:43:09.309 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:09.309 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:09.309 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3732089' 00:43:09.309 killing process with pid 3732089 00:43:09.309 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3732089 00:43:09.309 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3732089 00:43:09.568 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:09.568 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:09.568 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:09.568 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:43:09.568 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:43:09.568 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:43:09.568 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:09.568 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:09.568 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:09.568 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:09.568 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:09.568 
13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:11.514 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:11.514 00:43:11.514 real 0m12.585s 00:43:11.514 user 0m14.186s 00:43:11.514 sys 0m7.362s 00:43:11.514 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:11.514 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:11.514 ************************************ 00:43:11.514 END TEST nvmf_bdev_io_wait 00:43:11.514 ************************************ 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:11.775 ************************************ 00:43:11.775 START TEST nvmf_queue_depth 00:43:11.775 ************************************ 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:43:11.775 * Looking for test storage... 
00:43:11.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:11.775 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:11.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:11.775 --rc genhtml_branch_coverage=1 00:43:11.775 --rc genhtml_function_coverage=1 00:43:11.775 --rc genhtml_legend=1 00:43:11.775 --rc geninfo_all_blocks=1 00:43:11.775 --rc geninfo_unexecuted_blocks=1 00:43:11.775 00:43:11.775 ' 00:43:11.776 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:11.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:11.776 --rc genhtml_branch_coverage=1 00:43:11.776 --rc genhtml_function_coverage=1 00:43:11.776 --rc genhtml_legend=1 00:43:11.776 --rc geninfo_all_blocks=1 00:43:11.776 --rc geninfo_unexecuted_blocks=1 00:43:11.776 00:43:11.776 ' 00:43:11.776 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:43:11.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:11.776 --rc genhtml_branch_coverage=1 00:43:11.776 --rc genhtml_function_coverage=1 00:43:11.776 --rc genhtml_legend=1 00:43:11.776 --rc geninfo_all_blocks=1 00:43:11.776 --rc geninfo_unexecuted_blocks=1 00:43:11.776 00:43:11.776 ' 00:43:11.776 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:11.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:11.776 --rc genhtml_branch_coverage=1 00:43:11.776 --rc genhtml_function_coverage=1 00:43:11.776 --rc genhtml_legend=1 00:43:11.776 --rc 
geninfo_all_blocks=1 00:43:11.776 --rc geninfo_unexecuted_blocks=1 00:43:11.776 00:43:11.776 ' 00:43:11.776 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:11.776 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.037 13:13:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:12.037 13:13:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:12.037 13:13:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:43:12.037 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:20.176 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:20.176 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:43:20.176 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:20.176 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:20.176 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:20.176 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:20.176 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:20.177 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:43:20.177 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:20.177 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:43:20.177 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:43:20.177 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:43:20.177 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:43:20.177 
13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:43:20.177 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:43:20.177 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:20.177 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:20.177 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:20.177 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:20.177 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:20.177 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:20.177 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:20.177 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:20.177 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:20.177 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:20.177 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:20.177 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:20.177 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:20.177 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:20.177 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:20.177 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:43:20.177 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:20.177 13:13:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:43:20.177 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:43:20.177 Found net devices under 0000:4b:00.0: cvl_0_0 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:43:20.177 Found net devices under 0000:4b:00.1: cvl_0_1 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:20.177 13:13:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:20.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:43:20.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:43:20.177 00:43:20.177 --- 10.0.0.2 ping statistics --- 00:43:20.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:20.177 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:43:20.177 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:20.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:20.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:43:20.178 00:43:20.178 --- 10.0.0.1 ping statistics --- 00:43:20.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:20.178 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:43:20.178 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:20.178 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:43:20.178 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:20.178 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:20.178 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:20.178 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:20.178 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:20.178 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:20.178 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:20.178 13:13:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:43:20.178 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:20.178 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:20.178 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:20.178 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3736725 00:43:20.178 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3736725 00:43:20.178 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:43:20.178 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3736725 ']' 00:43:20.178 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:20.178 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:20.178 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:20.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:43:20.178 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:20.178 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:20.178 [2024-11-28 13:13:49.419263] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:20.178 [2024-11-28 13:13:49.420389] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:43:20.178 [2024-11-28 13:13:49.420444] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:20.178 [2024-11-28 13:13:49.567437] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:43:20.178 [2024-11-28 13:13:49.626593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:20.178 [2024-11-28 13:13:49.652749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:20.178 [2024-11-28 13:13:49.652801] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:20.178 [2024-11-28 13:13:49.652810] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:20.178 [2024-11-28 13:13:49.652817] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:20.178 [2024-11-28 13:13:49.652823] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:20.178 [2024-11-28 13:13:49.653552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:20.178 [2024-11-28 13:13:49.720267] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:20.178 [2024-11-28 13:13:49.720552] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:20.178 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:20.178 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:43:20.178 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:20.178 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:20.178 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:20.178 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:20.178 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:20.178 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.178 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:20.178 [2024-11-28 13:13:50.278398] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:20.178 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.178 13:13:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:20.178 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.178 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:20.439 Malloc0 00:43:20.439 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.439 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:20.439 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.439 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:20.439 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.439 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:20.439 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.439 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:20.439 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.439 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:20.439 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.439 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:20.439 [2024-11-28 13:13:50.358424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:20.439 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.439 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3736976 00:43:20.439 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:20.439 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3736976 /var/tmp/bdevperf.sock 00:43:20.439 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:43:20.439 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3736976 ']' 00:43:20.439 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:43:20.439 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:20.439 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:43:20.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:43:20.439 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:20.439 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:20.439 [2024-11-28 13:13:50.413116] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:43:20.439 [2024-11-28 13:13:50.413180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3736976 ] 00:43:20.439 [2024-11-28 13:13:50.545947] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:43:20.699 [2024-11-28 13:13:50.608017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:20.699 [2024-11-28 13:13:50.626864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:21.271 13:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:21.271 13:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:43:21.271 13:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:43:21.271 13:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.271 13:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:21.271 NVMe0n1 00:43:21.271 13:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:43:21.271 13:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:43:21.532 Running I/O for 10 seconds... 00:43:23.414 8324.00 IOPS, 32.52 MiB/s [2024-11-28T12:13:54.484Z] 8698.00 IOPS, 33.98 MiB/s [2024-11-28T12:13:55.865Z] 9218.67 IOPS, 36.01 MiB/s [2024-11-28T12:13:56.436Z] 10015.75 IOPS, 39.12 MiB/s [2024-11-28T12:13:57.819Z] 10665.20 IOPS, 41.66 MiB/s [2024-11-28T12:13:58.759Z] 11111.83 IOPS, 43.41 MiB/s [2024-11-28T12:13:59.701Z] 11434.57 IOPS, 44.67 MiB/s [2024-11-28T12:14:00.644Z] 11698.00 IOPS, 45.70 MiB/s [2024-11-28T12:14:01.584Z] 11894.67 IOPS, 46.46 MiB/s [2024-11-28T12:14:01.584Z] 12080.70 IOPS, 47.19 MiB/s 00:43:31.457 Latency(us) 00:43:31.457 [2024-11-28T12:14:01.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:31.457 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:43:31.457 Verification LBA range: start 0x0 length 0x4000 00:43:31.457 NVMe0n1 : 10.10 12063.60 47.12 0.00 0.00 84255.30 25728.30 74885.77 00:43:31.457 [2024-11-28T12:14:01.584Z] =================================================================================================================== 00:43:31.457 [2024-11-28T12:14:01.584Z] Total : 12063.60 47.12 0.00 0.00 84255.30 25728.30 74885.77 00:43:31.457 { 00:43:31.457 "results": [ 00:43:31.457 { 00:43:31.457 "job": "NVMe0n1", 00:43:31.457 "core_mask": "0x1", 00:43:31.457 "workload": "verify", 00:43:31.457 "status": "finished", 00:43:31.457 "verify_range": { 00:43:31.457 "start": 0, 00:43:31.457 "length": 16384 00:43:31.457 }, 00:43:31.457 "queue_depth": 1024, 00:43:31.457 "io_size": 4096, 00:43:31.457 "runtime": 10.099058, 00:43:31.457 "iops": 12063.600387283646, 00:43:31.457 "mibps": 47.12343901282674, 00:43:31.457 "io_failed": 0, 00:43:31.457 "io_timeout": 0, 00:43:31.457 "avg_latency_us": 
84255.30496577224, 00:43:31.457 "min_latency_us": 25728.299365185434, 00:43:31.457 "max_latency_us": 74885.77347143335 00:43:31.457 } 00:43:31.457 ], 00:43:31.457 "core_count": 1 00:43:31.457 } 00:43:31.457 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3736976 00:43:31.457 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3736976 ']' 00:43:31.457 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3736976 00:43:31.457 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:43:31.457 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:31.457 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3736976 00:43:31.717 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:31.717 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:31.717 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3736976' 00:43:31.717 killing process with pid 3736976 00:43:31.717 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3736976 00:43:31.717 Received shutdown signal, test time was about 10.000000 seconds 00:43:31.717 00:43:31.717 Latency(us) 00:43:31.717 [2024-11-28T12:14:01.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:31.717 [2024-11-28T12:14:01.844Z] 
=================================================================================================================== 00:43:31.717 [2024-11-28T12:14:01.844Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:31.717 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3736976 00:43:31.717 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:43:31.717 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:43:31.717 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:31.717 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:43:31.717 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:31.717 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:43:31.717 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:31.717 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:31.717 rmmod nvme_tcp 00:43:31.717 rmmod nvme_fabrics 00:43:31.717 rmmod nvme_keyring 00:43:31.717 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:31.717 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:43:31.717 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:43:31.717 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3736725 ']' 00:43:31.717 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@518 -- # killprocess 3736725 00:43:31.717 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3736725 ']' 00:43:31.717 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3736725 00:43:31.717 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:43:31.717 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:31.717 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3736725 00:43:31.978 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:31.978 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:31.978 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3736725' 00:43:31.978 killing process with pid 3736725 00:43:31.978 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3736725 00:43:31.978 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3736725 00:43:31.978 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:31.978 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:31.978 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:31.978 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:43:31.978 13:14:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:43:31.978 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:31.978 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:43:31.978 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:31.978 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:31.978 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:31.978 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:31.978 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:34.526 00:43:34.526 real 0m22.348s 00:43:34.526 user 0m24.536s 00:43:34.526 sys 0m7.293s 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:34.526 ************************************ 00:43:34.526 END TEST nvmf_queue_depth 00:43:34.526 ************************************ 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode 
-- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:34.526 ************************************ 00:43:34.526 START TEST nvmf_target_multipath 00:43:34.526 ************************************ 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:43:34.526 * Looking for test storage... 00:43:34.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 
-- # read -ra ver1 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:43:34.526 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:34.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:34.527 --rc genhtml_branch_coverage=1 00:43:34.527 --rc genhtml_function_coverage=1 00:43:34.527 --rc genhtml_legend=1 00:43:34.527 --rc geninfo_all_blocks=1 00:43:34.527 --rc geninfo_unexecuted_blocks=1 00:43:34.527 00:43:34.527 ' 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:34.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:34.527 --rc genhtml_branch_coverage=1 00:43:34.527 --rc genhtml_function_coverage=1 00:43:34.527 --rc genhtml_legend=1 00:43:34.527 --rc geninfo_all_blocks=1 00:43:34.527 --rc geninfo_unexecuted_blocks=1 00:43:34.527 00:43:34.527 ' 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:43:34.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:34.527 --rc genhtml_branch_coverage=1 00:43:34.527 --rc genhtml_function_coverage=1 00:43:34.527 --rc genhtml_legend=1 00:43:34.527 --rc geninfo_all_blocks=1 00:43:34.527 --rc geninfo_unexecuted_blocks=1 00:43:34.527 00:43:34.527 ' 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:34.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:34.527 --rc genhtml_branch_coverage=1 00:43:34.527 --rc genhtml_function_coverage=1 00:43:34.527 --rc genhtml_legend=1 00:43:34.527 --rc geninfo_all_blocks=1 00:43:34.527 --rc geninfo_unexecuted_blocks=1 00:43:34.527 00:43:34.527 ' 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:34.527 13:14:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:34.527 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:34.528 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:34.528 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:34.528 13:14:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:34.528 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:34.528 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:43:34.528 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:43:42.672 13:14:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:43:42.672 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:42.672 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:43:42.672 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:43:42.673 Found net devices under 0000:4b:00.0: cvl_0_0 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:42.673 13:14:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:43:42.673 Found net devices under 0000:4b:00.1: cvl_0_1 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:42.673 13:14:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:42.673 13:14:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:42.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:42.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:43:42.673 00:43:42.673 --- 10.0.0.2 ping statistics --- 00:43:42.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:42.673 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:42.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:42.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:43:42.673 00:43:42.673 --- 10.0.0.1 ping statistics --- 00:43:42.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:42.673 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:43:42.673 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:43:42.674 only one NIC for nvmf test 00:43:42.674 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:43:42.674 13:14:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:42.674 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:43:42.674 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:42.674 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:43:42.674 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:42.674 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:42.674 rmmod nvme_tcp 00:43:42.674 rmmod nvme_fabrics 00:43:42.674 rmmod nvme_keyring 00:43:42.674 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:42.674 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:43:42.674 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:43:42.674 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:43:42.674 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:42.674 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:42.674 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:42.674 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:43:42.674 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:43:42.674 13:14:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:42.674 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:43:42.674 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:42.674 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:42.674 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:42.674 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:42.674 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:44.059 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:44.059 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:44.060 
13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:44.060 00:43:44.060 real 0m9.672s 00:43:44.060 user 0m2.110s 00:43:44.060 sys 0m5.500s 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:43:44.060 ************************************ 00:43:44.060 END TEST nvmf_target_multipath 00:43:44.060 ************************************ 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:44.060 ************************************ 00:43:44.060 START TEST nvmf_zcopy 00:43:44.060 ************************************ 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:43:44.060 * Looking for test storage... 
00:43:44.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:43:44.060 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:43:44.060 13:14:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:44.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.060 --rc genhtml_branch_coverage=1 00:43:44.060 --rc genhtml_function_coverage=1 00:43:44.060 --rc genhtml_legend=1 00:43:44.060 --rc geninfo_all_blocks=1 00:43:44.060 --rc geninfo_unexecuted_blocks=1 00:43:44.060 00:43:44.060 ' 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:44.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.060 --rc genhtml_branch_coverage=1 00:43:44.060 --rc genhtml_function_coverage=1 00:43:44.060 --rc genhtml_legend=1 00:43:44.060 --rc geninfo_all_blocks=1 00:43:44.060 --rc geninfo_unexecuted_blocks=1 00:43:44.060 00:43:44.060 ' 00:43:44.060 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:43:44.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.060 --rc genhtml_branch_coverage=1 00:43:44.060 --rc genhtml_function_coverage=1 00:43:44.060 --rc genhtml_legend=1 00:43:44.060 --rc geninfo_all_blocks=1 00:43:44.060 --rc geninfo_unexecuted_blocks=1 00:43:44.060 00:43:44.060 ' 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:44.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.061 --rc genhtml_branch_coverage=1 00:43:44.061 --rc genhtml_function_coverage=1 00:43:44.061 --rc genhtml_legend=1 00:43:44.061 --rc geninfo_all_blocks=1 00:43:44.061 --rc geninfo_unexecuted_blocks=1 00:43:44.061 00:43:44.061 ' 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
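The trace above walks through `scripts/common.sh`'s `cmp_versions '<'` path one builtin at a time (split both versions on `IFS=.-:`, then compare component by component). A condensed standalone sketch of that logic, reconstructed from the traced lines (the function name `lt` and the `1.15 < 2` check are taken from the trace; the padding with `0` for missing components is an assumption about how unequal-length versions are handled):

```shell
#!/usr/bin/env bash
# Sketch of the version comparison traced in scripts/common.sh@333-368:
# split each version string on '.', '-' and ':' and compare numerically,
# component by component; the first differing component decides.
lt() {
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad short versions with 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1    # equal versions are not strictly less-than
}

lt 1.15 2 && echo "1.15 < 2"   # the lcov version gate seen in the trace
```

In the log this check passes (lcov 1.15 is older than 2), which is why the branch-coverage `LCOV_OPTS` are exported immediately afterwards.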
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:44.061 13:14:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:44.061 13:14:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:43:44.061 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:52.202 
13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:52.202 13:14:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:52.202 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:43:52.203 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:43:52.203 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:43:52.203 Found net devices under 0000:4b:00.0: cvl_0_0 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:43:52.203 Found net devices under 0000:4b:00.1: cvl_0_1 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
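The device-discovery loop above builds the `e810`, `x722` and `mlx` arrays from vendor/device ID lookups and then matches each found PCI function. A small sketch of that classification, using only the IDs visible in the traced `nvmf/common.sh` lines (the helper name `classify` is hypothetical; the real script indexes a `pci_bus_cache` populated elsewhere):

```shell
#!/usr/bin/env bash
# NIC classification as traced in nvmf/common.sh@313-344; all vendor and
# device IDs below are copied from the trace.
intel=0x8086 mellanox=0x15b3
e810="0x1592 0x159b"
x722="0x37d2"
mlx="0xa2dc 0x1021 0xa2d6 0x101d 0x101b 0x1017 0x1019 0x1015 0x1013"

classify() {    # classify <vendor-id> <device-id>
    local vendor=$1 device=$2
    if [ "$vendor" = "$intel" ]; then
        case " $e810 " in *" $device "*) echo e810; return ;; esac
        case " $x722 " in *" $device "*) echo x722; return ;; esac
    elif [ "$vendor" = "$mellanox" ]; then
        case " $mlx " in *" $device "*) echo mlx; return ;; esac
    fi
    echo unknown
}

# The two ports the trace finds (0000:4b:00.0 and .1) are 0x8086:0x159b,
# i.e. Intel E810 family, bound to the ice driver.
classify 0x8086 0x159b
```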
00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:52.203 13:14:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:52.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:52.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:43:52.203 00:43:52.203 --- 10.0.0.2 ping statistics --- 00:43:52.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:52.203 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:52.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:52.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:43:52.203 00:43:52.203 --- 10.0.0.1 ping statistics --- 00:43:52.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:52.203 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
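The `nvmf_tcp_init` section above moves one port into a fresh network namespace, addresses both ends, opens the NVMe/TCP port in iptables, and ping-verifies the link. Because the real commands need root plus the `cvl_0_0`/`cvl_0_1` ports, this sketch only prints the sequence reconstructed from the trace; replacing the `run()` body with `"$@"` would execute it on matching hardware:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace plumbing traced in nvmf/common.sh@265-291.
# Commands are echoed, not executed: they require root and the cvl_0_* NICs.
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }   # swap body for: "$@"  to actually apply

run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # target side
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                       # link check
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The two sub-millisecond ping replies in the log confirm this plumbing succeeded before the target application is launched inside the namespace.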
nvmfpid=3747314 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3747314 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3747314 ']' 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:52.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:52.203 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:52.203 [2024-11-28 13:14:21.481042] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:52.203 [2024-11-28 13:14:21.482173] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:43:52.203 [2024-11-28 13:14:21.482227] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:52.203 [2024-11-28 13:14:21.625042] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:43:52.203 [2024-11-28 13:14:21.666351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:52.203 [2024-11-28 13:14:21.683428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:52.203 [2024-11-28 13:14:21.683461] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:52.203 [2024-11-28 13:14:21.683469] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:52.203 [2024-11-28 13:14:21.683476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:52.203 [2024-11-28 13:14:21.683482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:52.203 [2024-11-28 13:14:21.684019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:52.203 [2024-11-28 13:14:21.733572] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:52.203 [2024-11-28 13:14:21.733824] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:43:52.203 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:52.203 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:43:52.203 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:52.203 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:52.203 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:52.464 [2024-11-28 13:14:22.360802] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:52.464 
13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:52.464 [2024-11-28 13:14:22.389101] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:52.464 malloc0 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:52.464 { 00:43:52.464 "params": { 00:43:52.464 "name": "Nvme$subsystem", 00:43:52.464 "trtype": "$TEST_TRANSPORT", 00:43:52.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:52.464 "adrfam": "ipv4", 00:43:52.464 "trsvcid": "$NVMF_PORT", 00:43:52.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:52.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:52.464 "hdgst": ${hdgst:-false}, 00:43:52.464 "ddgst": ${ddgst:-false} 00:43:52.464 }, 00:43:52.464 "method": "bdev_nvme_attach_controller" 00:43:52.464 } 00:43:52.464 EOF 00:43:52.464 )") 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:43:52.464 13:14:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:43:52.464 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:52.464 "params": { 00:43:52.464 "name": "Nvme1", 00:43:52.464 "trtype": "tcp", 00:43:52.464 "traddr": "10.0.0.2", 00:43:52.464 "adrfam": "ipv4", 00:43:52.464 "trsvcid": "4420", 00:43:52.464 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:52.464 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:52.464 "hdgst": false, 00:43:52.464 "ddgst": false 00:43:52.464 }, 00:43:52.464 "method": "bdev_nvme_attach_controller" 00:43:52.464 }' 00:43:52.464 [2024-11-28 13:14:22.494564] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:43:52.464 [2024-11-28 13:14:22.494638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3747433 ] 00:43:52.725 [2024-11-28 13:14:22.634053] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:43:52.725 [2024-11-28 13:14:22.679732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:52.725 [2024-11-28 13:14:22.708099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:52.986 Running I/O for 10 seconds... 
00:43:55.311 6390.00 IOPS, 49.92 MiB/s [2024-11-28T12:14:26.382Z] 6424.00 IOPS, 50.19 MiB/s [2024-11-28T12:14:27.324Z] 6433.00 IOPS, 50.26 MiB/s [2024-11-28T12:14:28.266Z] 6501.50 IOPS, 50.79 MiB/s [2024-11-28T12:14:29.270Z] 7120.40 IOPS, 55.63 MiB/s [2024-11-28T12:14:30.245Z] 7532.17 IOPS, 58.85 MiB/s [2024-11-28T12:14:31.185Z] 7816.71 IOPS, 61.07 MiB/s [2024-11-28T12:14:32.126Z] 8038.25 IOPS, 62.80 MiB/s [2024-11-28T12:14:33.511Z] 8212.44 IOPS, 64.16 MiB/s [2024-11-28T12:14:33.511Z] 8354.00 IOPS, 65.27 MiB/s 00:44:03.384 Latency(us) 00:44:03.384 [2024-11-28T12:14:33.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:03.384 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:44:03.384 Verification LBA range: start 0x0 length 0x1000 00:44:03.384 Nvme1n1 : 10.01 8357.54 65.29 0.00 0.00 15271.80 2271.75 27698.98 00:44:03.384 [2024-11-28T12:14:33.511Z] =================================================================================================================== 00:44:03.384 [2024-11-28T12:14:33.511Z] Total : 8357.54 65.29 0.00 0.00 15271.80 2271.75 27698.98 00:44:03.384 13:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:44:03.384 13:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3749362 00:44:03.384 13:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:44:03.384 13:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:44:03.384 13:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:44:03.384 13:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:44:03.384 13:14:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:44:03.384 13:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:03.385 13:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:03.385 { 00:44:03.385 "params": { 00:44:03.385 "name": "Nvme$subsystem", 00:44:03.385 "trtype": "$TEST_TRANSPORT", 00:44:03.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:03.385 "adrfam": "ipv4", 00:44:03.385 "trsvcid": "$NVMF_PORT", 00:44:03.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:03.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:03.385 "hdgst": ${hdgst:-false}, 00:44:03.385 "ddgst": ${ddgst:-false} 00:44:03.385 }, 00:44:03.385 "method": "bdev_nvme_attach_controller" 00:44:03.385 } 00:44:03.385 EOF 00:44:03.385 )") 00:44:03.385 [2024-11-28 13:14:33.180364] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.180395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 13:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:44:03.385 13:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:44:03.385 13:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:44:03.385 13:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:03.385 "params": { 00:44:03.385 "name": "Nvme1", 00:44:03.385 "trtype": "tcp", 00:44:03.385 "traddr": "10.0.0.2", 00:44:03.385 "adrfam": "ipv4", 00:44:03.385 "trsvcid": "4420", 00:44:03.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:03.385 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:03.385 "hdgst": false, 00:44:03.385 "ddgst": false 00:44:03.385 }, 00:44:03.385 "method": "bdev_nvme_attach_controller" 00:44:03.385 }' 00:44:03.385 [2024-11-28 13:14:33.192322] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.192332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.198811] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:44:03.385 [2024-11-28 13:14:33.198849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3749362 ] 00:44:03.385 [2024-11-28 13:14:33.204318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.204325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.216317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.216324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.228317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.228325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.240316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.240324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.252317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.252324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.264317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.264325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.276319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.276328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:44:03.385 [2024-11-28 13:14:33.288317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.288325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.300316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.300324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.312317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.312324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.322964] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:44:03.385 [2024-11-28 13:14:33.324317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.324324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.336316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.336323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.348317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.348325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.360316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.360323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.372317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.372328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.377419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:03.385 [2024-11-28 13:14:33.384317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.384326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.393373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:03.385 [2024-11-28 13:14:33.396318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.396327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.408324] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.408333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.420321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.420333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.432318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.432329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.444318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.444328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.456381] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.456398] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.468319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.468329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.480319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.480329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.492319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.492330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.385 [2024-11-28 13:14:33.504317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.385 [2024-11-28 13:14:33.504325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.646 [2024-11-28 13:14:33.516316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.646 [2024-11-28 13:14:33.516324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.646 [2024-11-28 13:14:33.528315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.646 [2024-11-28 13:14:33.528322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.646 [2024-11-28 13:14:33.540317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.646 [2024-11-28 13:14:33.540326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.646 [2024-11-28 13:14:33.552315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.646 [2024-11-28 13:14:33.552323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:44:03.646 [2024-11-28 13:14:33.564315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.646 [2024-11-28 13:14:33.564322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.646 [2024-11-28 13:14:33.576316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.646 [2024-11-28 13:14:33.576323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.646 [2024-11-28 13:14:33.588316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.646 [2024-11-28 13:14:33.588330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.646 [2024-11-28 13:14:33.600316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.646 [2024-11-28 13:14:33.600323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.646 [2024-11-28 13:14:33.612315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.646 [2024-11-28 13:14:33.612323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.646 [2024-11-28 13:14:33.624321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.646 [2024-11-28 13:14:33.624329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.646 [2024-11-28 13:14:33.636322] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.646 [2024-11-28 13:14:33.636337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.646 [2024-11-28 13:14:33.648317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.646 [2024-11-28 13:14:33.648328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.646 Running I/O for 5 seconds... 
00:44:03.646 [2024-11-28 13:14:33.664121] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.646 [2024-11-28 13:14:33.664138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.646 [2024-11-28 13:14:33.677370] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.646 [2024-11-28 13:14:33.677385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.646 [2024-11-28 13:14:33.691604] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.646 [2024-11-28 13:14:33.691621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.646 [2024-11-28 13:14:33.704843] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.646 [2024-11-28 13:14:33.704859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.646 [2024-11-28 13:14:33.719706] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.646 [2024-11-28 13:14:33.719722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.646 [2024-11-28 13:14:33.733052] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.646 [2024-11-28 13:14:33.733068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.646 [2024-11-28 13:14:33.747514] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.646 [2024-11-28 13:14:33.747529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.646 [2024-11-28 13:14:33.760678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.646 [2024-11-28 13:14:33.760692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.907 [2024-11-28 13:14:33.775562] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.907 [2024-11-28 13:14:33.775578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.907 [2024-11-28 13:14:33.788526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.907 [2024-11-28 13:14:33.788541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.907 [2024-11-28 13:14:33.801644] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.907 [2024-11-28 13:14:33.801659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.907 [2024-11-28 13:14:33.816019] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.907 [2024-11-28 13:14:33.816034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.907 [2024-11-28 13:14:33.829178] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.907 [2024-11-28 13:14:33.829192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.907 [2024-11-28 13:14:33.843836] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.907 [2024-11-28 13:14:33.843851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.907 [2024-11-28 13:14:33.856910] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.907 [2024-11-28 13:14:33.856924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.907 [2024-11-28 13:14:33.871484] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.907 [2024-11-28 13:14:33.871498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.907 [2024-11-28 13:14:33.884809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:44:03.907 [2024-11-28 13:14:33.884823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.907 [2024-11-28 13:14:33.899367] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.907 [2024-11-28 13:14:33.899382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.907 [2024-11-28 13:14:33.912481] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.907 [2024-11-28 13:14:33.912495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.907 [2024-11-28 13:14:33.925377] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.907 [2024-11-28 13:14:33.925392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.907 [2024-11-28 13:14:33.939693] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.907 [2024-11-28 13:14:33.939708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.907 [2024-11-28 13:14:33.952519] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.907 [2024-11-28 13:14:33.952534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.907 [2024-11-28 13:14:33.965237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.907 [2024-11-28 13:14:33.965251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.907 [2024-11-28 13:14:33.979972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.907 [2024-11-28 13:14:33.979987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.907 [2024-11-28 13:14:33.993249] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.907 
[2024-11-28 13:14:33.993263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.907 [2024-11-28 13:14:34.007679] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.907 [2024-11-28 13:14:34.007694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:03.907 [2024-11-28 13:14:34.020739] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:03.907 [2024-11-28 13:14:34.020753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.168 [2024-11-28 13:14:34.035347] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.168 [2024-11-28 13:14:34.035362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.168 [2024-11-28 13:14:34.048233] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.168 [2024-11-28 13:14:34.048247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.168 [2024-11-28 13:14:34.061571] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.168 [2024-11-28 13:14:34.061585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.168 [2024-11-28 13:14:34.075605] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.168 [2024-11-28 13:14:34.075619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.168 [2024-11-28 13:14:34.088845] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.168 [2024-11-28 13:14:34.088859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.168 [2024-11-28 13:14:34.103194] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.168 [2024-11-28 13:14:34.103209] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.168 [2024-11-28 13:14:34.116021] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.168 [2024-11-28 13:14:34.116036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.168 [2024-11-28 13:14:34.128879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.168 [2024-11-28 13:14:34.128893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.168 [2024-11-28 13:14:34.143445] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.168 [2024-11-28 13:14:34.143460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.168 [2024-11-28 13:14:34.156584] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.168 [2024-11-28 13:14:34.156597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.168 [2024-11-28 13:14:34.171150] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.168 [2024-11-28 13:14:34.171169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.168 [2024-11-28 13:14:34.184050] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.168 [2024-11-28 13:14:34.184065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.168 [2024-11-28 13:14:34.197781] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.168 [2024-11-28 13:14:34.197795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.168 [2024-11-28 13:14:34.211783] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.168 [2024-11-28 13:14:34.211797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:44:04.168 [2024-11-28 13:14:34.224908] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.168 [2024-11-28 13:14:34.224922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.168 [2024-11-28 13:14:34.238971] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.168 [2024-11-28 13:14:34.238986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.168 [2024-11-28 13:14:34.252133] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.168 [2024-11-28 13:14:34.252147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.168 [2024-11-28 13:14:34.265183] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.168 [2024-11-28 13:14:34.265197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.168 [2024-11-28 13:14:34.279691] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.168 [2024-11-28 13:14:34.279707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.168 [2024-11-28 13:14:34.292863] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.168 [2024-11-28 13:14:34.292877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.429 [2024-11-28 13:14:34.307472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.429 [2024-11-28 13:14:34.307487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.429 [2024-11-28 13:14:34.320571] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.429 [2024-11-28 13:14:34.320585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.429 [2024-11-28 13:14:34.333432] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:04.429 [2024-11-28 13:14:34.333445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:04.689 18937.00 IOPS, 147.95 MiB/s [2024-11-28T12:14:34.816Z]
00:44:05.729 18964.00 IOPS, 148.16 MiB/s [2024-11-28T12:14:35.856Z]
00:44:06.786 18973.33 IOPS, 148.23 MiB/s [2024-11-28T12:14:36.913Z]
00:44:06.786 [2024-11-28 13:14:36.667424] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:06.786 [2024-11-28 13:14:36.667443] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.786 [2024-11-28 13:14:36.680604] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.786 [2024-11-28 13:14:36.680618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.786 [2024-11-28 13:14:36.695621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.786 [2024-11-28 13:14:36.695636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.786 [2024-11-28 13:14:36.708958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.786 [2024-11-28 13:14:36.708971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.786 [2024-11-28 13:14:36.723417] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.786 [2024-11-28 13:14:36.723431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.786 [2024-11-28 13:14:36.736514] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.786 [2024-11-28 13:14:36.736529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.786 [2024-11-28 13:14:36.749348] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.786 [2024-11-28 13:14:36.749362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.786 [2024-11-28 13:14:36.763601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.786 [2024-11-28 13:14:36.763616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.786 [2024-11-28 13:14:36.776750] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.786 [2024-11-28 13:14:36.776764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:44:06.786 [2024-11-28 13:14:36.791472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.786 [2024-11-28 13:14:36.791487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.786 [2024-11-28 13:14:36.804569] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.786 [2024-11-28 13:14:36.804585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.786 [2024-11-28 13:14:36.817203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.786 [2024-11-28 13:14:36.817217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.786 [2024-11-28 13:14:36.831336] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.786 [2024-11-28 13:14:36.831351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.786 [2024-11-28 13:14:36.844276] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.786 [2024-11-28 13:14:36.844290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.786 [2024-11-28 13:14:36.857304] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.786 [2024-11-28 13:14:36.857318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.786 [2024-11-28 13:14:36.871905] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.786 [2024-11-28 13:14:36.871919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.786 [2024-11-28 13:14:36.885277] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.786 [2024-11-28 13:14:36.885291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.787 [2024-11-28 13:14:36.899469] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.787 [2024-11-28 13:14:36.899485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.046 [2024-11-28 13:14:36.912745] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.046 [2024-11-28 13:14:36.912760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.046 [2024-11-28 13:14:36.926746] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.046 [2024-11-28 13:14:36.926760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.046 [2024-11-28 13:14:36.939914] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.046 [2024-11-28 13:14:36.939928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.046 [2024-11-28 13:14:36.952653] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.046 [2024-11-28 13:14:36.952667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.046 [2024-11-28 13:14:36.967747] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.046 [2024-11-28 13:14:36.967762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.046 [2024-11-28 13:14:36.980934] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.046 [2024-11-28 13:14:36.980948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.046 [2024-11-28 13:14:36.995587] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.046 [2024-11-28 13:14:36.995602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.046 [2024-11-28 13:14:37.008497] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:44:07.046 [2024-11-28 13:14:37.008512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.046 [2024-11-28 13:14:37.021405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.046 [2024-11-28 13:14:37.021420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.046 [2024-11-28 13:14:37.035495] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.046 [2024-11-28 13:14:37.035510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.046 [2024-11-28 13:14:37.048967] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.046 [2024-11-28 13:14:37.048981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.046 [2024-11-28 13:14:37.063898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.046 [2024-11-28 13:14:37.063913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.046 [2024-11-28 13:14:37.077213] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.046 [2024-11-28 13:14:37.077228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.046 [2024-11-28 13:14:37.091569] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.046 [2024-11-28 13:14:37.091584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.046 [2024-11-28 13:14:37.104802] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.046 [2024-11-28 13:14:37.104816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.046 [2024-11-28 13:14:37.119411] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.046 
[2024-11-28 13:14:37.119425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.046 [2024-11-28 13:14:37.132527] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.046 [2024-11-28 13:14:37.132541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.046 [2024-11-28 13:14:37.145317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.046 [2024-11-28 13:14:37.145331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.046 [2024-11-28 13:14:37.159373] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.046 [2024-11-28 13:14:37.159388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.307 [2024-11-28 13:14:37.172656] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.307 [2024-11-28 13:14:37.172671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.307 [2024-11-28 13:14:37.187379] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.307 [2024-11-28 13:14:37.187393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.307 [2024-11-28 13:14:37.200383] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.307 [2024-11-28 13:14:37.200397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.307 [2024-11-28 13:14:37.213121] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.307 [2024-11-28 13:14:37.213135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.307 [2024-11-28 13:14:37.228110] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.307 [2024-11-28 13:14:37.228125] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.307 [2024-11-28 13:14:37.241346] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.307 [2024-11-28 13:14:37.241361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.307 [2024-11-28 13:14:37.255340] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.307 [2024-11-28 13:14:37.255354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.307 [2024-11-28 13:14:37.268647] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.307 [2024-11-28 13:14:37.268661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.307 [2024-11-28 13:14:37.283567] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.307 [2024-11-28 13:14:37.283582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.307 [2024-11-28 13:14:37.296833] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.307 [2024-11-28 13:14:37.296847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.307 [2024-11-28 13:14:37.311323] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.307 [2024-11-28 13:14:37.311337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.307 [2024-11-28 13:14:37.324552] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.307 [2024-11-28 13:14:37.324566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.307 [2024-11-28 13:14:37.337451] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.307 [2024-11-28 13:14:37.337465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:44:07.307 [2024-11-28 13:14:37.352058] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.307 [2024-11-28 13:14:37.352073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.307 [2024-11-28 13:14:37.365168] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.307 [2024-11-28 13:14:37.365182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.307 [2024-11-28 13:14:37.379827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.307 [2024-11-28 13:14:37.379841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.307 [2024-11-28 13:14:37.393313] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.307 [2024-11-28 13:14:37.393328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.307 [2024-11-28 13:14:37.408138] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.307 [2024-11-28 13:14:37.408153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.307 [2024-11-28 13:14:37.421167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.307 [2024-11-28 13:14:37.421182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.569 [2024-11-28 13:14:37.435581] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.569 [2024-11-28 13:14:37.435596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.569 [2024-11-28 13:14:37.448556] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.569 [2024-11-28 13:14:37.448571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.569 [2024-11-28 13:14:37.461530] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.569 [2024-11-28 13:14:37.461544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.569 [2024-11-28 13:14:37.475409] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.569 [2024-11-28 13:14:37.475423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.569 [2024-11-28 13:14:37.488545] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.569 [2024-11-28 13:14:37.488559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.569 [2024-11-28 13:14:37.501273] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.569 [2024-11-28 13:14:37.501287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.569 [2024-11-28 13:14:37.515373] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.569 [2024-11-28 13:14:37.515388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.569 [2024-11-28 13:14:37.528589] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.569 [2024-11-28 13:14:37.528604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.569 [2024-11-28 13:14:37.541231] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.569 [2024-11-28 13:14:37.541246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.569 [2024-11-28 13:14:37.555444] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.569 [2024-11-28 13:14:37.555459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.569 [2024-11-28 13:14:37.568565] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:44:07.569 [2024-11-28 13:14:37.568579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.569 [2024-11-28 13:14:37.581579] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.569 [2024-11-28 13:14:37.581593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.569 [2024-11-28 13:14:37.595527] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.569 [2024-11-28 13:14:37.595542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.569 [2024-11-28 13:14:37.608519] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.569 [2024-11-28 13:14:37.608534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.569 [2024-11-28 13:14:37.621495] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.569 [2024-11-28 13:14:37.621510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.569 [2024-11-28 13:14:37.635752] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.569 [2024-11-28 13:14:37.635767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.569 [2024-11-28 13:14:37.648753] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.569 [2024-11-28 13:14:37.648766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.569 18964.25 IOPS, 148.16 MiB/s [2024-11-28T12:14:37.696Z] [2024-11-28 13:14:37.663475] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.569 [2024-11-28 13:14:37.663490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.569 [2024-11-28 13:14:37.676771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:44:07.569 [2024-11-28 13:14:37.676785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.569 [2024-11-28 13:14:37.691903] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.569 [2024-11-28 13:14:37.691922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.830 [2024-11-28 13:14:37.705071] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.831 [2024-11-28 13:14:37.705085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.831 [2024-11-28 13:14:37.719293] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.831 [2024-11-28 13:14:37.719307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.831 [2024-11-28 13:14:37.732246] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.831 [2024-11-28 13:14:37.732260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.831 [2024-11-28 13:14:37.744654] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.831 [2024-11-28 13:14:37.744668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.831 [2024-11-28 13:14:37.759882] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.831 [2024-11-28 13:14:37.759896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.831 [2024-11-28 13:14:37.773152] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.831 [2024-11-28 13:14:37.773170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.831 [2024-11-28 13:14:37.787731] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.831 
[2024-11-28 13:14:37.787746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.831 [2024-11-28 13:14:37.800875] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.831 [2024-11-28 13:14:37.800890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.831 [2024-11-28 13:14:37.815428] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.831 [2024-11-28 13:14:37.815442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.831 [2024-11-28 13:14:37.828563] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.831 [2024-11-28 13:14:37.828578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.831 [2024-11-28 13:14:37.841790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.831 [2024-11-28 13:14:37.841805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.831 [2024-11-28 13:14:37.856042] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.831 [2024-11-28 13:14:37.856056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.831 [2024-11-28 13:14:37.869197] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.831 [2024-11-28 13:14:37.869211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.831 [2024-11-28 13:14:37.883593] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.831 [2024-11-28 13:14:37.883607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.831 [2024-11-28 13:14:37.896407] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.831 [2024-11-28 13:14:37.896422] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.831 [2024-11-28 13:14:37.909607] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.831 [2024-11-28 13:14:37.909621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.831 [2024-11-28 13:14:37.923760] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.831 [2024-11-28 13:14:37.923775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.831 [2024-11-28 13:14:37.936685] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.831 [2024-11-28 13:14:37.936699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.831 [2024-11-28 13:14:37.951795] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.831 [2024-11-28 13:14:37.951812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.091 [2024-11-28 13:14:37.964617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.091 [2024-11-28 13:14:37.964630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.091 [2024-11-28 13:14:37.979171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.091 [2024-11-28 13:14:37.979185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.091 [2024-11-28 13:14:37.992173] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.091 [2024-11-28 13:14:37.992188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.091 [2024-11-28 13:14:38.005057] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.091 [2024-11-28 13:14:38.005071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:44:08.091 [2024-11-28 13:14:38.019161] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.091 [2024-11-28 13:14:38.019175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.091 [2024-11-28 13:14:38.032126] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.091 [2024-11-28 13:14:38.032140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.091 [2024-11-28 13:14:38.045022] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.091 [2024-11-28 13:14:38.045035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.091 [2024-11-28 13:14:38.059521] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.091 [2024-11-28 13:14:38.059536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.092 [2024-11-28 13:14:38.072854] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.092 [2024-11-28 13:14:38.072868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.092 [2024-11-28 13:14:38.087516] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.092 [2024-11-28 13:14:38.087530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.092 [2024-11-28 13:14:38.100454] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.092 [2024-11-28 13:14:38.100468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.092 [2024-11-28 13:14:38.113621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.092 [2024-11-28 13:14:38.113635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.092 [2024-11-28 13:14:38.127639] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.092 [2024-11-28 13:14:38.127653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.614 18964.80 IOPS, 148.16 MiB/s 00:44:08.614 Latency(us) 00:44:08.614 [2024-11-28T12:14:38.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:08.614 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:44:08.614 Nvme1n1 : 5.01 18969.55 148.20 0.00 0.00 6742.19 2641.26 11495.62 00:44:08.614 [2024-11-28T12:14:38.741Z] =================================================================================================================== 00:44:08.614 
[2024-11-28T12:14:38.741Z] Total : 18969.55 148.20 0.00 0.00 6742.19 2641.26 11495.62 00:44:08.614 00:44:08.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3749362) - No such process 00:44:08.875 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3749362 00:44:08.875 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:08.875 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:08.875 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:44:08.875 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:08.875 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:44:08.875 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:08.875 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:44:08.875 delay0 00:44:08.875 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:08.875 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:44:08.876 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:08.876 13:14:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:44:08.876 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:08.876 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:44:09.136 [2024-11-28 13:14:39.023607] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:44:15.717 Initializing NVMe Controllers 00:44:15.717 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:44:15.717 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:44:15.717 Initialization complete. Launching workers. 00:44:15.717 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 6176 00:44:15.717 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 6454, failed to submit 42 00:44:15.717 success 6263, unsuccessful 191, failed 0 00:44:15.717 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:44:15.717 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:44:15.717 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:15.717 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:44:15.717 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:15.717 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:44:15.717 13:14:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:15.717 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:15.717 rmmod nvme_tcp 00:44:15.717 rmmod nvme_fabrics 00:44:15.977 rmmod nvme_keyring 00:44:15.977 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:15.978 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:44:15.978 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:44:15.978 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3747314 ']' 00:44:15.978 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3747314 00:44:15.978 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3747314 ']' 00:44:15.978 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3747314 00:44:15.978 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:44:15.978 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:15.978 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3747314 00:44:15.978 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:15.978 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:15.978 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3747314' 00:44:15.978 
killing process with pid 3747314 00:44:15.978 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3747314 00:44:15.978 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3747314 00:44:15.978 13:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:15.978 13:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:15.978 13:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:15.978 13:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:44:15.978 13:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:44:15.978 13:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:15.978 13:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:44:15.978 13:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:15.978 13:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:15.978 13:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:15.978 13:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:15.978 13:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:18.521 00:44:18.521 real 0m34.259s 00:44:18.521 user 0m43.550s 00:44:18.521 sys 
0m12.466s 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:44:18.521 ************************************ 00:44:18.521 END TEST nvmf_zcopy 00:44:18.521 ************************************ 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:44:18.521 ************************************ 00:44:18.521 START TEST nvmf_nmic 00:44:18.521 ************************************ 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:44:18.521 * Looking for test storage... 
00:44:18.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- 
# case "$op" in 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:44:18.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:18.521 --rc genhtml_branch_coverage=1 00:44:18.521 --rc genhtml_function_coverage=1 00:44:18.521 --rc genhtml_legend=1 00:44:18.521 --rc geninfo_all_blocks=1 00:44:18.521 --rc geninfo_unexecuted_blocks=1 00:44:18.521 00:44:18.521 ' 00:44:18.521 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:44:18.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:18.521 --rc genhtml_branch_coverage=1 00:44:18.522 --rc genhtml_function_coverage=1 00:44:18.522 --rc genhtml_legend=1 00:44:18.522 --rc geninfo_all_blocks=1 00:44:18.522 --rc geninfo_unexecuted_blocks=1 00:44:18.522 00:44:18.522 ' 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:44:18.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:18.522 --rc genhtml_branch_coverage=1 00:44:18.522 --rc genhtml_function_coverage=1 00:44:18.522 --rc genhtml_legend=1 00:44:18.522 --rc geninfo_all_blocks=1 00:44:18.522 --rc geninfo_unexecuted_blocks=1 00:44:18.522 00:44:18.522 ' 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:44:18.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:18.522 --rc genhtml_branch_coverage=1 00:44:18.522 --rc genhtml_function_coverage=1 00:44:18.522 --rc genhtml_legend=1 00:44:18.522 --rc geninfo_all_blocks=1 00:44:18.522 --rc geninfo_unexecuted_blocks=1 00:44:18.522 00:44:18.522 ' 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:44:18.522 13:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:26.653 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:26.653 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@315 -- # pci_devs=() 00:44:26.653 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:26.653 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:26.653 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:26.653 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:26.653 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:26.653 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:44:26.653 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:26.653 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:44:26.653 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:44:26.653 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:44:26.653 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:44:26.653 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:44:26.653 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:44:26.653 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:26.654 13:14:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:44:26.654 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:44:26.654 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:44:26.654 Found net devices under 0000:4b:00.0: cvl_0_0 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:44:26.654 Found net devices under 0000:4b:00.1: cvl_0_1 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:26.654 13:14:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:26.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:26.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:44:26.654 00:44:26.654 --- 10.0.0.2 ping statistics --- 00:44:26.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:26.654 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:26.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:26.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:44:26.654 00:44:26.654 --- 10.0.0.1 ping statistics --- 00:44:26.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:26.654 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3755932 
00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3755932 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3755932 ']' 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:26.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:26.654 13:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:26.654 [2024-11-28 13:14:55.736413] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:26.654 [2024-11-28 13:14:55.737528] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:44:26.654 [2024-11-28 13:14:55.737579] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:26.654 [2024-11-28 13:14:55.882136] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:44:26.654 [2024-11-28 13:14:55.942192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:26.654 [2024-11-28 13:14:55.971657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:26.654 [2024-11-28 13:14:55.971705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:26.654 [2024-11-28 13:14:55.971713] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:26.654 [2024-11-28 13:14:55.971720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:26.654 [2024-11-28 13:14:55.971726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:26.654 [2024-11-28 13:14:55.973929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:26.654 [2024-11-28 13:14:55.974088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:26.654 [2024-11-28 13:14:55.974250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:26.654 [2024-11-28 13:14:55.974250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:26.654 [2024-11-28 13:14:56.036592] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:26.654 [2024-11-28 13:14:56.038103] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:44:26.654 [2024-11-28 13:14:56.038275] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:44:26.654 [2024-11-28 13:14:56.039106] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:44:26.655 [2024-11-28 13:14:56.039147] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:26.655 [2024-11-28 13:14:56.571078] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:26.655 Malloc0 
00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:26.655 [2024-11-28 13:14:56.655350] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:44:26.655 test case1: single bdev can't be used in multiple subsystems 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:26.655 [2024-11-28 13:14:56.690695] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:44:26.655 [2024-11-28 13:14:56.690716] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:44:26.655 [2024-11-28 13:14:56.690724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:26.655 request: 00:44:26.655 { 00:44:26.655 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:44:26.655 "namespace": { 00:44:26.655 "bdev_name": "Malloc0", 00:44:26.655 "no_auto_visible": false, 00:44:26.655 "hide_metadata": false 00:44:26.655 }, 00:44:26.655 "method": "nvmf_subsystem_add_ns", 00:44:26.655 "req_id": 1 00:44:26.655 } 00:44:26.655 Got JSON-RPC error response 00:44:26.655 response: 00:44:26.655 { 00:44:26.655 "code": -32602, 00:44:26.655 "message": "Invalid parameters" 00:44:26.655 } 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:44:26.655 Adding namespace failed - expected result. 
00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:44:26.655 test case2: host connect to nvmf target in multiple paths 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:26.655 [2024-11-28 13:14:56.702804] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.655 13:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:44:27.227 13:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:44:27.488 13:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:44:27.488 13:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:44:27.488 13:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:44:27.488 13:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:44:27.488 13:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:44:29.403 13:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:44:29.403 13:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:44:29.403 13:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:44:29.690 13:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:44:29.690 13:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:44:29.690 13:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:44:29.690 13:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:44:29.690 [global] 00:44:29.690 thread=1 00:44:29.690 invalidate=1 00:44:29.690 rw=write 00:44:29.690 time_based=1 00:44:29.690 runtime=1 00:44:29.690 ioengine=libaio 00:44:29.690 direct=1 00:44:29.690 bs=4096 00:44:29.690 iodepth=1 00:44:29.690 norandommap=0 00:44:29.690 numjobs=1 00:44:29.690 00:44:29.691 verify_dump=1 00:44:29.691 verify_backlog=512 00:44:29.691 verify_state_save=0 00:44:29.691 do_verify=1 00:44:29.691 verify=crc32c-intel 00:44:29.691 [job0] 00:44:29.691 filename=/dev/nvme0n1 00:44:29.691 Could not set queue depth (nvme0n1) 00:44:29.957 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:29.957 fio-3.35 00:44:29.957 Starting 1 thread 00:44:31.341 00:44:31.341 job0: (groupid=0, jobs=1): err= 0: pid=3756898: Thu Nov 28 
13:15:01 2024 00:44:31.341 read: IOPS=18, BW=75.5KiB/s (77.3kB/s)(76.0KiB/1007msec) 00:44:31.341 slat (nsec): min=26783, max=28181, avg=27445.68, stdev=351.52 00:44:31.341 clat (usec): min=40607, max=41122, avg=40944.73, stdev=112.94 00:44:31.341 lat (usec): min=40634, max=41149, avg=40972.18, stdev=112.97 00:44:31.341 clat percentiles (usec): 00:44:31.341 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:44:31.341 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:44:31.341 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:44:31.341 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:44:31.341 | 99.99th=[41157] 00:44:31.341 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:44:31.341 slat (usec): min=10, max=27917, avg=81.29, stdev=1232.67 00:44:31.341 clat (usec): min=130, max=572, avg=356.72, stdev=83.33 00:44:31.341 lat (usec): min=141, max=28356, avg=438.00, stdev=1239.27 00:44:31.341 clat percentiles (usec): 00:44:31.341 | 1.00th=[ 192], 5.00th=[ 217], 10.00th=[ 243], 20.00th=[ 285], 00:44:31.341 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 347], 60.00th=[ 379], 00:44:31.341 | 70.00th=[ 416], 80.00th=[ 441], 90.00th=[ 474], 95.00th=[ 486], 00:44:31.341 | 99.00th=[ 515], 99.50th=[ 519], 99.90th=[ 570], 99.95th=[ 570], 00:44:31.341 | 99.99th=[ 570] 00:44:31.341 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:44:31.341 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:31.341 lat (usec) : 250=12.24%, 500=82.11%, 750=2.07% 00:44:31.341 lat (msec) : 50=3.58% 00:44:31.341 cpu : usr=0.80%, sys=1.19%, ctx=534, majf=0, minf=1 00:44:31.341 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:31.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:31.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:31.341 issued 
rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:31.341 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:31.341 00:44:31.341 Run status group 0 (all jobs): 00:44:31.341 READ: bw=75.5KiB/s (77.3kB/s), 75.5KiB/s-75.5KiB/s (77.3kB/s-77.3kB/s), io=76.0KiB (77.8kB), run=1007-1007msec 00:44:31.341 WRITE: bw=2034KiB/s (2083kB/s), 2034KiB/s-2034KiB/s (2083kB/s-2083kB/s), io=2048KiB (2097kB), run=1007-1007msec 00:44:31.341 00:44:31.341 Disk stats (read/write): 00:44:31.341 nvme0n1: ios=42/512, merge=0/0, ticks=1650/172, in_queue=1822, util=99.80% 00:44:31.341 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:44:31.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:44:31.341 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:44:31.341 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:44:31.341 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:44:31.341 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:31.341 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:44:31.341 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:31.341 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:44:31.341 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:44:31.341 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:44:31.341 13:15:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:31.341 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:44:31.341 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:31.341 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:44:31.341 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:31.341 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:31.341 rmmod nvme_tcp 00:44:31.341 rmmod nvme_fabrics 00:44:31.341 rmmod nvme_keyring 00:44:31.342 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:31.342 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:44:31.342 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:44:31.342 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3755932 ']' 00:44:31.342 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3755932 00:44:31.342 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3755932 ']' 00:44:31.342 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3755932 00:44:31.342 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:44:31.342 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:31.342 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3755932 
00:44:31.342 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:31.342 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:31.342 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3755932' 00:44:31.342 killing process with pid 3755932 00:44:31.342 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3755932 00:44:31.342 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3755932 00:44:31.602 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:31.602 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:31.602 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:31.602 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:44:31.602 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:44:31.602 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:31.602 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:44:31.602 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:31.602 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:31.602 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:31.602 13:15:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:31.602 13:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:33.510 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:33.510 00:44:33.510 real 0m15.356s 00:44:33.510 user 0m32.816s 00:44:33.510 sys 0m7.197s 00:44:33.510 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:33.510 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:33.510 ************************************ 00:44:33.510 END TEST nvmf_nmic 00:44:33.510 ************************************ 00:44:33.510 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:44:33.510 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:44:33.510 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:33.510 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:44:33.771 ************************************ 00:44:33.771 START TEST nvmf_fio_target 00:44:33.771 ************************************ 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:44:33.771 * Looking for test storage... 
00:44:33.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:33.771 
13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:33.771 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:44:33.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:33.771 --rc genhtml_branch_coverage=1 00:44:33.772 --rc genhtml_function_coverage=1 00:44:33.772 --rc genhtml_legend=1 00:44:33.772 --rc geninfo_all_blocks=1 00:44:33.772 --rc geninfo_unexecuted_blocks=1 00:44:33.772 00:44:33.772 ' 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:44:33.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:33.772 --rc genhtml_branch_coverage=1 00:44:33.772 --rc genhtml_function_coverage=1 00:44:33.772 --rc genhtml_legend=1 00:44:33.772 --rc geninfo_all_blocks=1 00:44:33.772 --rc geninfo_unexecuted_blocks=1 00:44:33.772 00:44:33.772 ' 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:44:33.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:33.772 --rc genhtml_branch_coverage=1 00:44:33.772 --rc genhtml_function_coverage=1 00:44:33.772 --rc genhtml_legend=1 00:44:33.772 --rc geninfo_all_blocks=1 00:44:33.772 --rc geninfo_unexecuted_blocks=1 00:44:33.772 00:44:33.772 ' 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:44:33.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:33.772 --rc genhtml_branch_coverage=1 00:44:33.772 --rc genhtml_function_coverage=1 00:44:33.772 --rc genhtml_legend=1 00:44:33.772 --rc geninfo_all_blocks=1 
00:44:33.772 --rc geninfo_unexecuted_blocks=1 00:44:33.772 00:44:33.772 ' 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:44:33.772 
13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:33.772 13:15:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:33.772 
13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:33.772 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:34.033 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:34.033 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:34.033 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:44:34.033 13:15:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:42.170 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:42.170 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:44:42.170 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:42.170 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:42.170 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:42.170 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:42.170 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:44:42.171 13:15:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:44:42.171 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:44:42.171 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:42.171 
13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:44:42.171 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:44:42.171 Found net devices under 0000:4b:00.1: cvl_0_1 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:42.171 13:15:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:42.171 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:42.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:42.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:44:42.172 00:44:42.172 --- 10.0.0.2 ping statistics --- 00:44:42.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:42.172 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:42.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:42.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:44:42.172 00:44:42.172 --- 10.0.0.1 ping statistics --- 00:44:42.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:42.172 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:42.172 13:15:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3761797 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3761797 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3761797 ']' 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:42.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:42.172 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:42.172 [2024-11-28 13:15:11.442951] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:42.172 [2024-11-28 13:15:11.444085] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:44:42.172 [2024-11-28 13:15:11.444138] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:42.172 [2024-11-28 13:15:11.589043] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:44:42.172 [2024-11-28 13:15:11.648055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:42.172 [2024-11-28 13:15:11.675943] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:42.172 [2024-11-28 13:15:11.675989] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:42.172 [2024-11-28 13:15:11.675997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:42.172 [2024-11-28 13:15:11.676005] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:42.172 [2024-11-28 13:15:11.676010] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:42.172 [2024-11-28 13:15:11.677883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:42.172 [2024-11-28 13:15:11.678041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:42.172 [2024-11-28 13:15:11.678243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:42.172 [2024-11-28 13:15:11.678243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:42.172 [2024-11-28 13:15:11.741778] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:42.172 [2024-11-28 13:15:11.743085] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:44:42.172 [2024-11-28 13:15:11.743174] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:44:42.172 [2024-11-28 13:15:11.743969] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:44:42.172 [2024-11-28 13:15:11.743974] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:44:42.172 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:42.172 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:44:42.172 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:42.172 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:42.172 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:42.172 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:42.172 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:44:42.433 [2024-11-28 13:15:12.459123] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:42.433 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:42.694 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:44:42.694 13:15:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:42.954 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:44:42.954 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:43.215 13:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:44:43.215 13:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:43.215 13:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:44:43.215 13:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:44:43.475 13:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:43.736 13:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:44:43.736 13:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:43.997 13:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:44:43.997 13:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:43.997 13:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:44:43.997 13:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:44:44.258 13:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:44:44.518 13:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:44:44.518 13:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:44.518 13:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:44:44.518 13:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:44:44.778 13:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:45.038 [2024-11-28 13:15:14.934903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:45.038 13:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:44:45.038 13:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:44:45.298 13:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:44:45.868 13:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:44:45.868 13:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:44:45.868 13:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:44:45.868 13:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:44:45.868 13:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:44:45.868 13:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:44:47.781 13:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:44:47.781 13:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:44:47.781 13:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:44:47.781 13:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:44:47.781 13:15:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:44:47.781 13:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:44:47.781 13:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:44:47.781 [global] 00:44:47.781 thread=1 00:44:47.781 invalidate=1 00:44:47.781 rw=write 00:44:47.781 time_based=1 00:44:47.781 runtime=1 00:44:47.781 ioengine=libaio 00:44:47.781 direct=1 00:44:47.781 bs=4096 00:44:47.781 iodepth=1 00:44:47.781 norandommap=0 00:44:47.781 numjobs=1 00:44:47.781 00:44:47.781 verify_dump=1 00:44:47.781 verify_backlog=512 00:44:47.781 verify_state_save=0 00:44:47.781 do_verify=1 00:44:47.781 verify=crc32c-intel 00:44:47.781 [job0] 00:44:47.781 filename=/dev/nvme0n1 00:44:47.781 [job1] 00:44:47.781 filename=/dev/nvme0n2 00:44:47.781 [job2] 00:44:47.781 filename=/dev/nvme0n3 00:44:47.781 [job3] 00:44:47.781 filename=/dev/nvme0n4 00:44:47.781 Could not set queue depth (nvme0n1) 00:44:47.781 Could not set queue depth (nvme0n2) 00:44:47.781 Could not set queue depth (nvme0n3) 00:44:47.781 Could not set queue depth (nvme0n4) 00:44:48.041 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:48.041 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:48.041 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:48.041 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:48.041 fio-3.35 00:44:48.041 Starting 4 threads 00:44:49.426 00:44:49.426 job0: (groupid=0, jobs=1): err= 0: pid=3763358: Thu Nov 28 13:15:19 2024 00:44:49.426 read: IOPS=511, 
BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:44:49.426 slat (nsec): min=24221, max=43864, avg=25513.00, stdev=3051.03 00:44:49.426 clat (usec): min=706, max=1506, avg=1217.02, stdev=111.24 00:44:49.426 lat (usec): min=732, max=1531, avg=1242.54, stdev=110.97 00:44:49.426 clat percentiles (usec): 00:44:49.426 | 1.00th=[ 914], 5.00th=[ 1037], 10.00th=[ 1074], 20.00th=[ 1123], 00:44:49.426 | 30.00th=[ 1172], 40.00th=[ 1205], 50.00th=[ 1237], 60.00th=[ 1254], 00:44:49.426 | 70.00th=[ 1287], 80.00th=[ 1303], 90.00th=[ 1336], 95.00th=[ 1369], 00:44:49.426 | 99.00th=[ 1450], 99.50th=[ 1483], 99.90th=[ 1500], 99.95th=[ 1500], 00:44:49.426 | 99.99th=[ 1500] 00:44:49.426 write: IOPS=552, BW=2210KiB/s (2263kB/s)(2212KiB/1001msec); 0 zone resets 00:44:49.426 slat (nsec): min=9348, max=68040, avg=27430.58, stdev=10972.13 00:44:49.426 clat (usec): min=199, max=1028, avg=616.30, stdev=152.08 00:44:49.426 lat (usec): min=211, max=1041, avg=643.73, stdev=156.78 00:44:49.426 clat percentiles (usec): 00:44:49.426 | 1.00th=[ 314], 5.00th=[ 375], 10.00th=[ 416], 20.00th=[ 482], 00:44:49.426 | 30.00th=[ 523], 40.00th=[ 562], 50.00th=[ 611], 60.00th=[ 652], 00:44:49.426 | 70.00th=[ 709], 80.00th=[ 758], 90.00th=[ 824], 95.00th=[ 865], 00:44:49.426 | 99.00th=[ 955], 99.50th=[ 988], 99.90th=[ 1029], 99.95th=[ 1029], 00:44:49.426 | 99.99th=[ 1029] 00:44:49.426 bw ( KiB/s): min= 4096, max= 4096, per=46.09%, avg=4096.00, stdev= 0.00, samples=1 00:44:49.426 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:49.426 lat (usec) : 250=0.19%, 500=12.30%, 750=28.08%, 1000=12.86% 00:44:49.426 lat (msec) : 2=46.57% 00:44:49.427 cpu : usr=1.60%, sys=3.20%, ctx=1066, majf=0, minf=1 00:44:49.427 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:49.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:49.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:49.427 issued rwts: total=512,553,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:44:49.427 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:49.427 job1: (groupid=0, jobs=1): err= 0: pid=3763372: Thu Nov 28 13:15:19 2024 00:44:49.427 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:44:49.427 slat (nsec): min=6921, max=64365, avg=28769.46, stdev=4639.98 00:44:49.427 clat (usec): min=555, max=1416, avg=1111.01, stdev=113.57 00:44:49.427 lat (usec): min=584, max=1444, avg=1139.77, stdev=113.81 00:44:49.427 clat percentiles (usec): 00:44:49.427 | 1.00th=[ 799], 5.00th=[ 898], 10.00th=[ 963], 20.00th=[ 1029], 00:44:49.427 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1123], 60.00th=[ 1139], 00:44:49.427 | 70.00th=[ 1172], 80.00th=[ 1205], 90.00th=[ 1254], 95.00th=[ 1287], 00:44:49.427 | 99.00th=[ 1319], 99.50th=[ 1352], 99.90th=[ 1418], 99.95th=[ 1418], 00:44:49.427 | 99.99th=[ 1418] 00:44:49.427 write: IOPS=697, BW=2789KiB/s (2856kB/s)(2792KiB/1001msec); 0 zone resets 00:44:49.427 slat (nsec): min=9319, max=56126, avg=30570.44, stdev=11470.76 00:44:49.427 clat (usec): min=142, max=1065, avg=548.58, stdev=130.60 00:44:49.427 lat (usec): min=154, max=1101, avg=579.15, stdev=135.34 00:44:49.427 clat percentiles (usec): 00:44:49.427 | 1.00th=[ 285], 5.00th=[ 338], 10.00th=[ 371], 20.00th=[ 433], 00:44:49.427 | 30.00th=[ 478], 40.00th=[ 515], 50.00th=[ 545], 60.00th=[ 586], 00:44:49.427 | 70.00th=[ 619], 80.00th=[ 660], 90.00th=[ 717], 95.00th=[ 766], 00:44:49.427 | 99.00th=[ 865], 99.50th=[ 881], 99.90th=[ 1074], 99.95th=[ 1074], 00:44:49.427 | 99.99th=[ 1074] 00:44:49.427 bw ( KiB/s): min= 4096, max= 4096, per=46.09%, avg=4096.00, stdev= 0.00, samples=1 00:44:49.427 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:49.427 lat (usec) : 250=0.33%, 500=20.50%, 750=33.64%, 1000=9.83% 00:44:49.427 lat (msec) : 2=35.70% 00:44:49.427 cpu : usr=3.10%, sys=4.20%, ctx=1211, majf=0, minf=1 00:44:49.427 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:44:49.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:49.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:49.427 issued rwts: total=512,698,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:49.427 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:49.427 job2: (groupid=0, jobs=1): err= 0: pid=3763383: Thu Nov 28 13:15:19 2024 00:44:49.427 read: IOPS=16, BW=66.4KiB/s (68.0kB/s)(68.0KiB/1024msec) 00:44:49.427 slat (nsec): min=26582, max=31679, avg=27449.41, stdev=1594.96 00:44:49.427 clat (usec): min=1035, max=42212, avg=39255.80, stdev=9859.49 00:44:49.427 lat (usec): min=1062, max=42238, avg=39283.25, stdev=9859.58 00:44:49.427 clat percentiles (usec): 00:44:49.427 | 1.00th=[ 1037], 5.00th=[ 1037], 10.00th=[40633], 20.00th=[41157], 00:44:49.427 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:44:49.427 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:44:49.427 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:49.427 | 99.99th=[42206] 00:44:49.427 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:44:49.427 slat (nsec): min=9279, max=67978, avg=30926.00, stdev=9308.93 00:44:49.427 clat (usec): min=221, max=1031, avg=658.21, stdev=123.18 00:44:49.427 lat (usec): min=238, max=1064, avg=689.14, stdev=127.08 00:44:49.427 clat percentiles (usec): 00:44:49.427 | 1.00th=[ 367], 5.00th=[ 445], 10.00th=[ 469], 20.00th=[ 570], 00:44:49.427 | 30.00th=[ 603], 40.00th=[ 635], 50.00th=[ 668], 60.00th=[ 701], 00:44:49.427 | 70.00th=[ 725], 80.00th=[ 758], 90.00th=[ 791], 95.00th=[ 857], 00:44:49.427 | 99.00th=[ 922], 99.50th=[ 996], 99.90th=[ 1029], 99.95th=[ 1029], 00:44:49.427 | 99.99th=[ 1029] 00:44:49.427 bw ( KiB/s): min= 4096, max= 4096, per=46.09%, avg=4096.00, stdev= 0.00, samples=1 00:44:49.427 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:49.427 lat (usec) : 250=0.19%, 
500=11.53%, 750=63.89%, 1000=20.79% 00:44:49.427 lat (msec) : 2=0.57%, 50=3.02% 00:44:49.427 cpu : usr=1.08%, sys=1.86%, ctx=529, majf=0, minf=2 00:44:49.427 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:49.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:49.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:49.427 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:49.427 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:49.427 job3: (groupid=0, jobs=1): err= 0: pid=3763384: Thu Nov 28 13:15:19 2024 00:44:49.427 read: IOPS=16, BW=67.1KiB/s (68.7kB/s)(68.0KiB/1014msec) 00:44:49.427 slat (nsec): min=8042, max=28191, avg=25611.12, stdev=6247.42 00:44:49.427 clat (usec): min=576, max=42104, avg=39485.93, stdev=10028.05 00:44:49.427 lat (usec): min=586, max=42132, avg=39511.54, stdev=10032.14 00:44:49.427 clat percentiles (usec): 00:44:49.427 | 1.00th=[ 578], 5.00th=[ 578], 10.00th=[41157], 20.00th=[41681], 00:44:49.427 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:44:49.427 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:44:49.427 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:49.427 | 99.99th=[42206] 00:44:49.427 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:44:49.427 slat (nsec): min=9808, max=55328, avg=32427.85, stdev=10251.76 00:44:49.427 clat (usec): min=275, max=992, avg=622.98, stdev=115.13 00:44:49.427 lat (usec): min=285, max=1028, avg=655.41, stdev=120.13 00:44:49.427 clat percentiles (usec): 00:44:49.427 | 1.00th=[ 326], 5.00th=[ 416], 10.00th=[ 465], 20.00th=[ 529], 00:44:49.427 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 660], 00:44:49.427 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 791], 00:44:49.427 | 99.00th=[ 848], 99.50th=[ 898], 99.90th=[ 996], 99.95th=[ 996], 00:44:49.427 | 
99.99th=[ 996] 00:44:49.427 bw ( KiB/s): min= 4096, max= 4096, per=46.09%, avg=4096.00, stdev= 0.00, samples=1 00:44:49.427 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:49.427 lat (usec) : 500=13.80%, 750=69.19%, 1000=13.99% 00:44:49.427 lat (msec) : 50=3.02% 00:44:49.427 cpu : usr=1.48%, sys=1.68%, ctx=533, majf=0, minf=1 00:44:49.427 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:49.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:49.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:49.427 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:49.427 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:49.427 00:44:49.427 Run status group 0 (all jobs): 00:44:49.427 READ: bw=4133KiB/s (4232kB/s), 66.4KiB/s-2046KiB/s (68.0kB/s-2095kB/s), io=4232KiB (4334kB), run=1001-1024msec 00:44:49.427 WRITE: bw=8887KiB/s (9100kB/s), 2000KiB/s-2789KiB/s (2048kB/s-2856kB/s), io=9100KiB (9318kB), run=1001-1024msec 00:44:49.427 00:44:49.427 Disk stats (read/write): 00:44:49.427 nvme0n1: ios=452/512, merge=0/0, ticks=552/305, in_queue=857, util=87.58% 00:44:49.427 nvme0n2: ios=511/512, merge=0/0, ticks=1111/235, in_queue=1346, util=100.00% 00:44:49.427 nvme0n3: ios=12/512, merge=0/0, ticks=461/274, in_queue=735, util=88.47% 00:44:49.427 nvme0n4: ios=34/512, merge=0/0, ticks=1385/245, in_queue=1630, util=96.25% 00:44:49.427 13:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:44:49.427 [global] 00:44:49.427 thread=1 00:44:49.427 invalidate=1 00:44:49.427 rw=randwrite 00:44:49.427 time_based=1 00:44:49.427 runtime=1 00:44:49.427 ioengine=libaio 00:44:49.427 direct=1 00:44:49.427 bs=4096 00:44:49.427 iodepth=1 00:44:49.427 norandommap=0 00:44:49.427 numjobs=1 00:44:49.427 00:44:49.427 
verify_dump=1 00:44:49.427 verify_backlog=512 00:44:49.427 verify_state_save=0 00:44:49.427 do_verify=1 00:44:49.427 verify=crc32c-intel 00:44:49.427 [job0] 00:44:49.427 filename=/dev/nvme0n1 00:44:49.427 [job1] 00:44:49.427 filename=/dev/nvme0n2 00:44:49.427 [job2] 00:44:49.427 filename=/dev/nvme0n3 00:44:49.427 [job3] 00:44:49.427 filename=/dev/nvme0n4 00:44:49.427 Could not set queue depth (nvme0n1) 00:44:49.427 Could not set queue depth (nvme0n2) 00:44:49.427 Could not set queue depth (nvme0n3) 00:44:49.427 Could not set queue depth (nvme0n4) 00:44:49.996 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:49.996 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:49.996 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:49.996 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:49.996 fio-3.35 00:44:49.996 Starting 4 threads 00:44:50.938 00:44:50.938 job0: (groupid=0, jobs=1): err= 0: pid=3763795: Thu Nov 28 13:15:21 2024 00:44:50.938 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:44:50.938 slat (nsec): min=25083, max=45597, avg=26696.72, stdev=2790.29 00:44:50.938 clat (usec): min=643, max=1417, avg=1071.05, stdev=108.33 00:44:50.939 lat (usec): min=669, max=1443, avg=1097.75, stdev=108.16 00:44:50.939 clat percentiles (usec): 00:44:50.939 | 1.00th=[ 766], 5.00th=[ 857], 10.00th=[ 922], 20.00th=[ 988], 00:44:50.939 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1106], 00:44:50.939 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1221], 00:44:50.939 | 99.00th=[ 1287], 99.50th=[ 1336], 99.90th=[ 1418], 99.95th=[ 1418], 00:44:50.939 | 99.99th=[ 1418] 00:44:50.939 write: IOPS=712, BW=2849KiB/s (2918kB/s)(2852KiB/1001msec); 0 zone resets 00:44:50.939 slat (nsec): min=9392, 
max=61523, avg=30651.17, stdev=8721.27 00:44:50.939 clat (usec): min=135, max=986, avg=564.81, stdev=153.04 00:44:50.939 lat (usec): min=145, max=998, avg=595.46, stdev=155.41 00:44:50.939 clat percentiles (usec): 00:44:50.939 | 1.00th=[ 161], 5.00th=[ 302], 10.00th=[ 355], 20.00th=[ 424], 00:44:50.939 | 30.00th=[ 490], 40.00th=[ 529], 50.00th=[ 578], 60.00th=[ 619], 00:44:50.939 | 70.00th=[ 652], 80.00th=[ 701], 90.00th=[ 750], 95.00th=[ 791], 00:44:50.939 | 99.00th=[ 873], 99.50th=[ 906], 99.90th=[ 988], 99.95th=[ 988], 00:44:50.939 | 99.99th=[ 988] 00:44:50.939 bw ( KiB/s): min= 4096, max= 4096, per=34.79%, avg=4096.00, stdev= 0.00, samples=1 00:44:50.939 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:50.939 lat (usec) : 250=1.71%, 500=17.22%, 750=33.71%, 1000=14.69% 00:44:50.939 lat (msec) : 2=32.65% 00:44:50.939 cpu : usr=2.00%, sys=3.60%, ctx=1229, majf=0, minf=1 00:44:50.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:50.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:50.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:50.939 issued rwts: total=512,713,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:50.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:50.939 job1: (groupid=0, jobs=1): err= 0: pid=3763810: Thu Nov 28 13:15:21 2024 00:44:50.939 read: IOPS=651, BW=2605KiB/s (2668kB/s)(2608KiB/1001msec) 00:44:50.939 slat (nsec): min=6280, max=61533, avg=26300.13, stdev=6066.15 00:44:50.939 clat (usec): min=267, max=1073, avg=798.98, stdev=130.33 00:44:50.939 lat (usec): min=295, max=1100, avg=825.28, stdev=131.41 00:44:50.939 clat percentiles (usec): 00:44:50.939 | 1.00th=[ 379], 5.00th=[ 545], 10.00th=[ 627], 20.00th=[ 701], 00:44:50.939 | 30.00th=[ 750], 40.00th=[ 791], 50.00th=[ 824], 60.00th=[ 848], 00:44:50.939 | 70.00th=[ 881], 80.00th=[ 906], 90.00th=[ 938], 95.00th=[ 971], 00:44:50.939 | 99.00th=[ 1020], 99.50th=[ 
1045], 99.90th=[ 1074], 99.95th=[ 1074], 00:44:50.939 | 99.99th=[ 1074] 00:44:50.939 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:44:50.939 slat (nsec): min=8886, max=52887, avg=29611.64, stdev=8969.70 00:44:50.939 clat (usec): min=117, max=1054, avg=409.21, stdev=115.76 00:44:50.939 lat (usec): min=130, max=1106, avg=438.82, stdev=117.90 00:44:50.939 clat percentiles (usec): 00:44:50.939 | 1.00th=[ 127], 5.00th=[ 229], 10.00th=[ 277], 20.00th=[ 310], 00:44:50.939 | 30.00th=[ 338], 40.00th=[ 371], 50.00th=[ 404], 60.00th=[ 441], 00:44:50.939 | 70.00th=[ 474], 80.00th=[ 510], 90.00th=[ 562], 95.00th=[ 603], 00:44:50.939 | 99.00th=[ 660], 99.50th=[ 685], 99.90th=[ 766], 99.95th=[ 1057], 00:44:50.939 | 99.99th=[ 1057] 00:44:50.939 bw ( KiB/s): min= 4096, max= 4096, per=34.79%, avg=4096.00, stdev= 0.00, samples=1 00:44:50.939 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:50.939 lat (usec) : 250=4.47%, 500=44.15%, 750=23.81%, 1000=26.79% 00:44:50.939 lat (msec) : 2=0.78% 00:44:50.939 cpu : usr=3.50%, sys=6.20%, ctx=1676, majf=0, minf=1 00:44:50.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:50.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:50.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:50.939 issued rwts: total=652,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:50.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:50.939 job2: (groupid=0, jobs=1): err= 0: pid=3763828: Thu Nov 28 13:15:21 2024 00:44:50.939 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:44:50.939 slat (nsec): min=7453, max=45542, avg=28072.16, stdev=2461.49 00:44:50.939 clat (usec): min=649, max=1305, avg=1011.90, stdev=115.96 00:44:50.939 lat (usec): min=676, max=1333, avg=1039.97, stdev=115.93 00:44:50.939 clat percentiles (usec): 00:44:50.939 | 1.00th=[ 709], 5.00th=[ 799], 10.00th=[ 857], 20.00th=[ 914], 
00:44:50.939 | 30.00th=[ 971], 40.00th=[ 1004], 50.00th=[ 1020], 60.00th=[ 1045], 00:44:50.939 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1205], 00:44:50.939 | 99.00th=[ 1254], 99.50th=[ 1270], 99.90th=[ 1303], 99.95th=[ 1303], 00:44:50.939 | 99.99th=[ 1303] 00:44:50.939 write: IOPS=696, BW=2785KiB/s (2852kB/s)(2788KiB/1001msec); 0 zone resets 00:44:50.939 slat (nsec): min=9382, max=72826, avg=31997.54, stdev=9532.57 00:44:50.939 clat (usec): min=154, max=1134, avg=619.79, stdev=148.14 00:44:50.939 lat (usec): min=172, max=1144, avg=651.79, stdev=151.22 00:44:50.939 clat percentiles (usec): 00:44:50.939 | 1.00th=[ 243], 5.00th=[ 371], 10.00th=[ 437], 20.00th=[ 498], 00:44:50.939 | 30.00th=[ 545], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 660], 00:44:50.939 | 70.00th=[ 693], 80.00th=[ 742], 90.00th=[ 807], 95.00th=[ 857], 00:44:50.939 | 99.00th=[ 963], 99.50th=[ 1037], 99.90th=[ 1139], 99.95th=[ 1139], 00:44:50.939 | 99.99th=[ 1139] 00:44:50.939 bw ( KiB/s): min= 4096, max= 4096, per=34.79%, avg=4096.00, stdev= 0.00, samples=1 00:44:50.939 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:50.939 lat (usec) : 250=0.66%, 500=11.17%, 750=35.81%, 1000=26.47% 00:44:50.939 lat (msec) : 2=25.89% 00:44:50.939 cpu : usr=2.40%, sys=5.00%, ctx=1211, majf=0, minf=1 00:44:50.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:50.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:50.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:50.939 issued rwts: total=512,697,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:50.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:50.939 job3: (groupid=0, jobs=1): err= 0: pid=3763834: Thu Nov 28 13:15:21 2024 00:44:50.939 read: IOPS=102, BW=412KiB/s (421kB/s)(412KiB/1001msec) 00:44:50.939 slat (nsec): min=26745, max=27961, avg=27086.21, stdev=237.13 00:44:50.939 clat (usec): min=690, max=42076, 
avg=6535.50, stdev=13871.24 00:44:50.939 lat (usec): min=717, max=42104, avg=6562.59, stdev=13871.30 00:44:50.939 clat percentiles (usec): 00:44:50.939 | 1.00th=[ 725], 5.00th=[ 807], 10.00th=[ 857], 20.00th=[ 938], 00:44:50.939 | 30.00th=[ 1012], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1156], 00:44:50.939 | 70.00th=[ 1205], 80.00th=[ 1287], 90.00th=[41157], 95.00th=[41157], 00:44:50.939 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:50.939 | 99.99th=[42206] 00:44:50.939 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:44:50.939 slat (nsec): min=9047, max=63165, avg=28942.97, stdev=9805.07 00:44:50.939 clat (usec): min=134, max=1249, avg=596.60, stdev=179.17 00:44:50.939 lat (usec): min=144, max=1281, avg=625.54, stdev=181.29 00:44:50.939 clat percentiles (usec): 00:44:50.939 | 1.00th=[ 161], 5.00th=[ 265], 10.00th=[ 326], 20.00th=[ 441], 00:44:50.939 | 30.00th=[ 519], 40.00th=[ 570], 50.00th=[ 619], 60.00th=[ 660], 00:44:50.939 | 70.00th=[ 709], 80.00th=[ 750], 90.00th=[ 799], 95.00th=[ 848], 00:44:50.939 | 99.00th=[ 955], 99.50th=[ 1037], 99.90th=[ 1254], 99.95th=[ 1254], 00:44:50.939 | 99.99th=[ 1254] 00:44:50.939 bw ( KiB/s): min= 4096, max= 4096, per=34.79%, avg=4096.00, stdev= 0.00, samples=1 00:44:50.939 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:50.939 lat (usec) : 250=3.09%, 500=19.35%, 750=43.90%, 1000=21.14% 00:44:50.939 lat (msec) : 2=10.24%, 50=2.28% 00:44:50.939 cpu : usr=1.10%, sys=2.20%, ctx=615, majf=0, minf=2 00:44:50.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:50.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:50.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:50.939 issued rwts: total=103,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:50.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:50.939 00:44:50.939 Run status group 0 (all 
jobs): 00:44:50.939 READ: bw=7109KiB/s (7280kB/s), 412KiB/s-2605KiB/s (421kB/s-2668kB/s), io=7116KiB (7287kB), run=1001-1001msec 00:44:50.939 WRITE: bw=11.5MiB/s (12.1MB/s), 2046KiB/s-4092KiB/s (2095kB/s-4190kB/s), io=11.5MiB (12.1MB), run=1001-1001msec 00:44:50.939 00:44:50.939 Disk stats (read/write): 00:44:50.939 nvme0n1: ios=532/512, merge=0/0, ticks=1217/246, in_queue=1463, util=97.80% 00:44:50.939 nvme0n2: ios=549/927, merge=0/0, ticks=400/291, in_queue=691, util=88.07% 00:44:50.939 nvme0n3: ios=497/512, merge=0/0, ticks=1349/245, in_queue=1594, util=96.73% 00:44:50.939 nvme0n4: ios=62/512, merge=0/0, ticks=503/246, in_queue=749, util=89.43% 00:44:50.939 13:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:44:51.200 [global] 00:44:51.200 thread=1 00:44:51.200 invalidate=1 00:44:51.200 rw=write 00:44:51.200 time_based=1 00:44:51.200 runtime=1 00:44:51.200 ioengine=libaio 00:44:51.200 direct=1 00:44:51.200 bs=4096 00:44:51.200 iodepth=128 00:44:51.201 norandommap=0 00:44:51.201 numjobs=1 00:44:51.201 00:44:51.201 verify_dump=1 00:44:51.201 verify_backlog=512 00:44:51.201 verify_state_save=0 00:44:51.201 do_verify=1 00:44:51.201 verify=crc32c-intel 00:44:51.201 [job0] 00:44:51.201 filename=/dev/nvme0n1 00:44:51.201 [job1] 00:44:51.201 filename=/dev/nvme0n2 00:44:51.201 [job2] 00:44:51.201 filename=/dev/nvme0n3 00:44:51.201 [job3] 00:44:51.201 filename=/dev/nvme0n4 00:44:51.201 Could not set queue depth (nvme0n1) 00:44:51.201 Could not set queue depth (nvme0n2) 00:44:51.201 Could not set queue depth (nvme0n3) 00:44:51.201 Could not set queue depth (nvme0n4) 00:44:51.462 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:51.462 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:51.462 job2: 
(g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:51.462 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:51.462 fio-3.35 00:44:51.462 Starting 4 threads 00:44:52.848 00:44:52.848 job0: (groupid=0, jobs=1): err= 0: pid=3764248: Thu Nov 28 13:15:22 2024 00:44:52.848 read: IOPS=7026, BW=27.4MiB/s (28.8MB/s)(27.6MiB/1005msec) 00:44:52.848 slat (nsec): min=1005, max=9047.7k, avg=71984.07, stdev=558483.40 00:44:52.848 clat (usec): min=2496, max=28490, avg=9431.11, stdev=3265.02 00:44:52.848 lat (usec): min=3181, max=28497, avg=9503.09, stdev=3306.53 00:44:52.848 clat percentiles (usec): 00:44:52.848 | 1.00th=[ 4752], 5.00th=[ 5735], 10.00th=[ 6063], 20.00th=[ 6718], 00:44:52.848 | 30.00th=[ 7570], 40.00th=[ 8225], 50.00th=[ 8586], 60.00th=[ 9634], 00:44:52.848 | 70.00th=[10552], 80.00th=[11600], 90.00th=[13173], 95.00th=[16057], 00:44:52.848 | 99.00th=[19268], 99.50th=[25035], 99.90th=[27657], 99.95th=[28443], 00:44:52.848 | 99.99th=[28443] 00:44:52.848 write: IOPS=7132, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1005msec); 0 zone resets 00:44:52.848 slat (nsec): min=1710, max=8823.2k, avg=62793.06, stdev=473883.90 00:44:52.848 clat (usec): min=2829, max=28489, avg=8467.70, stdev=2956.58 00:44:52.848 lat (usec): min=2837, max=28498, avg=8530.49, stdev=2972.04 00:44:52.848 clat percentiles (usec): 00:44:52.848 | 1.00th=[ 3720], 5.00th=[ 4555], 10.00th=[ 5080], 20.00th=[ 6128], 00:44:52.848 | 30.00th=[ 6718], 40.00th=[ 7046], 50.00th=[ 7898], 60.00th=[ 8717], 00:44:52.848 | 70.00th=[10159], 80.00th=[10683], 90.00th=[12387], 95.00th=[14746], 00:44:52.848 | 99.00th=[17171], 99.50th=[18220], 99.90th=[22152], 99.95th=[22152], 00:44:52.848 | 99.99th=[28443] 00:44:52.848 bw ( KiB/s): min=27296, max=30048, per=27.48%, avg=28672.00, stdev=1945.96, samples=2 00:44:52.849 iops : min= 6824, max= 7512, avg=7168.00, stdev=486.49, samples=2 00:44:52.849 lat (msec) : 4=1.30%, 
10=65.38%, 20=32.81%, 50=0.51% 00:44:52.849 cpu : usr=5.88%, sys=7.27%, ctx=366, majf=0, minf=1 00:44:52.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:44:52.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:52.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:52.849 issued rwts: total=7062,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:52.849 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:52.849 job1: (groupid=0, jobs=1): err= 0: pid=3764260: Thu Nov 28 13:15:22 2024 00:44:52.849 read: IOPS=8669, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1004msec) 00:44:52.849 slat (nsec): min=884, max=6384.5k, avg=56518.43, stdev=375826.89 00:44:52.849 clat (usec): min=4325, max=26322, avg=7441.61, stdev=1398.99 00:44:52.849 lat (usec): min=4330, max=26327, avg=7498.13, stdev=1423.54 00:44:52.849 clat percentiles (usec): 00:44:52.849 | 1.00th=[ 4817], 5.00th=[ 5407], 10.00th=[ 5932], 20.00th=[ 6652], 00:44:52.849 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7439], 00:44:52.849 | 70.00th=[ 7635], 80.00th=[ 8160], 90.00th=[ 8848], 95.00th=[ 9765], 00:44:52.849 | 99.00th=[11994], 99.50th=[12911], 99.90th=[16712], 99.95th=[16712], 00:44:52.849 | 99.99th=[26346] 00:44:52.849 write: IOPS=8964, BW=35.0MiB/s (36.7MB/s)(35.2MiB/1004msec); 0 zone resets 00:44:52.849 slat (nsec): min=1536, max=5968.1k, avg=51665.45, stdev=278997.86 00:44:52.849 clat (usec): min=773, max=13302, avg=6949.99, stdev=1096.20 00:44:52.849 lat (usec): min=787, max=13304, avg=7001.66, stdev=1111.54 00:44:52.849 clat percentiles (usec): 00:44:52.849 | 1.00th=[ 3425], 5.00th=[ 4424], 10.00th=[ 5735], 20.00th=[ 6652], 00:44:52.849 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7046], 60.00th=[ 7177], 00:44:52.849 | 70.00th=[ 7242], 80.00th=[ 7373], 90.00th=[ 7701], 95.00th=[ 8717], 00:44:52.849 | 99.00th=[ 9765], 99.50th=[ 9896], 99.90th=[12780], 99.95th=[13304], 00:44:52.849 | 99.99th=[13304] 
00:44:52.849 bw ( KiB/s): min=34120, max=36864, per=34.02%, avg=35492.00, stdev=1940.30, samples=2 00:44:52.849 iops : min= 8530, max= 9216, avg=8873.00, stdev=485.08, samples=2 00:44:52.849 lat (usec) : 1000=0.02% 00:44:52.849 lat (msec) : 2=0.18%, 4=0.89%, 10=96.53%, 20=2.37%, 50=0.02% 00:44:52.849 cpu : usr=5.68%, sys=7.08%, ctx=920, majf=0, minf=2 00:44:52.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:44:52.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:52.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:52.849 issued rwts: total=8704,9000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:52.849 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:52.849 job2: (groupid=0, jobs=1): err= 0: pid=3764275: Thu Nov 28 13:15:22 2024 00:44:52.849 read: IOPS=6083, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1010msec) 00:44:52.849 slat (nsec): min=980, max=10599k, avg=75831.22, stdev=567936.74 00:44:52.849 clat (usec): min=2873, max=52343, avg=9201.02, stdev=3987.65 00:44:52.849 lat (usec): min=2878, max=52346, avg=9276.85, stdev=4046.59 00:44:52.849 clat percentiles (usec): 00:44:52.849 | 1.00th=[ 5080], 5.00th=[ 6456], 10.00th=[ 6915], 20.00th=[ 7177], 00:44:52.849 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 8094], 60.00th=[ 8717], 00:44:52.849 | 70.00th=[ 9503], 80.00th=[10552], 90.00th=[12518], 95.00th=[14222], 00:44:52.849 | 99.00th=[24511], 99.50th=[38536], 99.90th=[51119], 99.95th=[52167], 00:44:52.849 | 99.99th=[52167] 00:44:52.849 write: IOPS=6524, BW=25.5MiB/s (26.7MB/s)(25.7MiB/1010msec); 0 zone resets 00:44:52.849 slat (nsec): min=1646, max=9229.7k, avg=76499.69, stdev=503005.22 00:44:52.849 clat (usec): min=1237, max=75301, avg=10864.45, stdev=9533.15 00:44:52.849 lat (usec): min=1249, max=75309, avg=10940.95, stdev=9579.71 00:44:52.849 clat percentiles (usec): 00:44:52.849 | 1.00th=[ 3228], 5.00th=[ 4752], 10.00th=[ 5211], 20.00th=[ 6259], 00:44:52.849 | 
30.00th=[ 6980], 40.00th=[ 7570], 50.00th=[ 7832], 60.00th=[ 8225], 00:44:52.849 | 70.00th=[10290], 80.00th=[12911], 90.00th=[16450], 95.00th=[26084], 00:44:52.849 | 99.00th=[63177], 99.50th=[69731], 99.90th=[74974], 99.95th=[74974], 00:44:52.849 | 99.99th=[74974] 00:44:52.849 bw ( KiB/s): min=23984, max=27712, per=24.78%, avg=25848.00, stdev=2636.09, samples=2 00:44:52.849 iops : min= 5996, max= 6928, avg=6462.00, stdev=659.02, samples=2 00:44:52.849 lat (msec) : 2=0.21%, 4=0.83%, 10=71.04%, 20=23.76%, 50=3.24% 00:44:52.849 lat (msec) : 100=0.93% 00:44:52.849 cpu : usr=4.46%, sys=6.64%, ctx=429, majf=0, minf=2 00:44:52.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:44:52.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:52.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:52.849 issued rwts: total=6144,6590,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:52.849 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:52.849 job3: (groupid=0, jobs=1): err= 0: pid=3764281: Thu Nov 28 13:15:22 2024 00:44:52.849 read: IOPS=3430, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1006msec) 00:44:52.849 slat (nsec): min=925, max=18267k, avg=141144.90, stdev=1112111.45 00:44:52.849 clat (usec): min=3399, max=45542, avg=18685.40, stdev=9463.29 00:44:52.849 lat (usec): min=3444, max=49109, avg=18826.55, stdev=9558.19 00:44:52.849 clat percentiles (usec): 00:44:52.849 | 1.00th=[ 4555], 5.00th=[ 8029], 10.00th=[ 8356], 20.00th=[ 8848], 00:44:52.849 | 30.00th=[ 9372], 40.00th=[13698], 50.00th=[18744], 60.00th=[21627], 00:44:52.849 | 70.00th=[24773], 80.00th=[28443], 90.00th=[31589], 95.00th=[34866], 00:44:52.849 | 99.00th=[41157], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351], 00:44:52.849 | 99.99th=[45351] 00:44:52.849 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:44:52.849 slat (nsec): min=1621, max=14045k, avg=126298.14, stdev=807805.26 00:44:52.849 clat 
(usec): min=646, max=44525, avg=17571.05, stdev=8558.22 00:44:52.849 lat (usec): min=903, max=44535, avg=17697.35, stdev=8627.00 00:44:52.849 clat percentiles (usec): 00:44:52.849 | 1.00th=[ 1631], 5.00th=[ 4555], 10.00th=[ 7177], 20.00th=[ 9765], 00:44:52.849 | 30.00th=[14484], 40.00th=[15270], 50.00th=[16188], 60.00th=[18744], 00:44:52.849 | 70.00th=[20055], 80.00th=[21890], 90.00th=[29492], 95.00th=[33817], 00:44:52.849 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:44:52.849 | 99.99th=[44303] 00:44:52.849 bw ( KiB/s): min=13000, max=15672, per=13.74%, avg=14336.00, stdev=1889.39, samples=2 00:44:52.849 iops : min= 3250, max= 3918, avg=3584.00, stdev=472.35, samples=2 00:44:52.849 lat (usec) : 750=0.01%, 1000=0.10% 00:44:52.849 lat (msec) : 2=0.48%, 4=1.07%, 10=25.26%, 20=36.35%, 50=36.73% 00:44:52.849 cpu : usr=2.69%, sys=3.98%, ctx=318, majf=0, minf=2 00:44:52.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:44:52.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:52.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:52.849 issued rwts: total=3451,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:52.849 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:52.849 00:44:52.849 Run status group 0 (all jobs): 00:44:52.849 READ: bw=98.1MiB/s (103MB/s), 13.4MiB/s-33.9MiB/s (14.1MB/s-35.5MB/s), io=99.1MiB (104MB), run=1004-1010msec 00:44:52.849 WRITE: bw=102MiB/s (107MB/s), 13.9MiB/s-35.0MiB/s (14.6MB/s-36.7MB/s), io=103MiB (108MB), run=1004-1010msec 00:44:52.849 00:44:52.849 Disk stats (read/write): 00:44:52.849 nvme0n1: ios=5629/5639, merge=0/0, ticks=53442/48058, in_queue=101500, util=96.99% 00:44:52.849 nvme0n2: ios=7207/7647, merge=0/0, ticks=31940/29740, in_queue=61680, util=88.58% 00:44:52.849 nvme0n3: ios=5653/5783, merge=0/0, ticks=47700/53173, in_queue=100873, util=92.20% 00:44:52.849 nvme0n4: ios=2560/2758, merge=0/0, 
ticks=33712/30141, in_queue=63853, util=89.11% 00:44:52.849 13:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:44:52.849 [global] 00:44:52.849 thread=1 00:44:52.849 invalidate=1 00:44:52.849 rw=randwrite 00:44:52.849 time_based=1 00:44:52.849 runtime=1 00:44:52.849 ioengine=libaio 00:44:52.849 direct=1 00:44:52.849 bs=4096 00:44:52.849 iodepth=128 00:44:52.849 norandommap=0 00:44:52.849 numjobs=1 00:44:52.849 00:44:52.849 verify_dump=1 00:44:52.849 verify_backlog=512 00:44:52.849 verify_state_save=0 00:44:52.849 do_verify=1 00:44:52.849 verify=crc32c-intel 00:44:52.849 [job0] 00:44:52.849 filename=/dev/nvme0n1 00:44:52.849 [job1] 00:44:52.849 filename=/dev/nvme0n2 00:44:52.849 [job2] 00:44:52.849 filename=/dev/nvme0n3 00:44:52.849 [job3] 00:44:52.849 filename=/dev/nvme0n4 00:44:52.849 Could not set queue depth (nvme0n1) 00:44:52.849 Could not set queue depth (nvme0n2) 00:44:52.849 Could not set queue depth (nvme0n3) 00:44:52.849 Could not set queue depth (nvme0n4) 00:44:53.118 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:53.118 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:53.118 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:53.118 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:53.118 fio-3.35 00:44:53.119 Starting 4 threads 00:44:54.509 00:44:54.509 job0: (groupid=0, jobs=1): err= 0: pid=3764681: Thu Nov 28 13:15:24 2024 00:44:54.509 read: IOPS=7237, BW=28.3MiB/s (29.6MB/s)(28.5MiB/1008msec) 00:44:54.509 slat (nsec): min=911, max=13649k, avg=63927.80, stdev=530790.11 00:44:54.509 clat (usec): min=1598, max=26069, avg=8874.59, 
stdev=3038.63 00:44:54.509 lat (usec): min=1605, max=26094, avg=8938.52, stdev=3075.80 00:44:54.509 clat percentiles (usec): 00:44:54.509 | 1.00th=[ 3228], 5.00th=[ 5342], 10.00th=[ 5866], 20.00th=[ 6456], 00:44:54.509 | 30.00th=[ 6915], 40.00th=[ 7504], 50.00th=[ 8160], 60.00th=[ 9110], 00:44:54.509 | 70.00th=[10028], 80.00th=[10945], 90.00th=[12649], 95.00th=[14877], 00:44:54.509 | 99.00th=[18482], 99.50th=[18744], 99.90th=[20579], 99.95th=[22676], 00:44:54.509 | 99.99th=[26084] 00:44:54.509 write: IOPS=7619, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1008msec); 0 zone resets 00:44:54.509 slat (nsec): min=1503, max=9175.9k, avg=56215.55, stdev=435265.92 00:44:54.509 clat (usec): min=615, max=29607, avg=8225.94, stdev=4470.63 00:44:54.509 lat (usec): min=756, max=29616, avg=8282.15, stdev=4498.24 00:44:54.509 clat percentiles (usec): 00:44:54.509 | 1.00th=[ 1860], 5.00th=[ 3556], 10.00th=[ 4113], 20.00th=[ 5276], 00:44:54.509 | 30.00th=[ 6194], 40.00th=[ 6718], 50.00th=[ 7046], 60.00th=[ 7767], 00:44:54.509 | 70.00th=[ 8717], 80.00th=[10290], 90.00th=[13435], 95.00th=[15795], 00:44:54.509 | 99.00th=[27132], 99.50th=[28443], 99.90th=[29492], 99.95th=[29492], 00:44:54.509 | 99.99th=[29492] 00:44:54.509 bw ( KiB/s): min=30536, max=30896, per=29.69%, avg=30716.00, stdev=254.56, samples=2 00:44:54.509 iops : min= 7634, max= 7724, avg=7679.00, stdev=63.64, samples=2 00:44:54.509 lat (usec) : 750=0.01%, 1000=0.11% 00:44:54.509 lat (msec) : 2=0.51%, 4=4.70%, 10=68.36%, 20=24.30%, 50=2.01% 00:44:54.509 cpu : usr=3.87%, sys=8.94%, ctx=525, majf=0, minf=1 00:44:54.509 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:44:54.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:54.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:54.509 issued rwts: total=7295,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:54.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:54.509 job1: (groupid=0, 
jobs=1): err= 0: pid=3764697: Thu Nov 28 13:15:24 2024 00:44:54.509 read: IOPS=5774, BW=22.6MiB/s (23.7MB/s)(22.7MiB/1008msec) 00:44:54.509 slat (nsec): min=925, max=9126.0k, avg=67991.00, stdev=506051.72 00:44:54.509 clat (usec): min=1374, max=31827, avg=8766.69, stdev=4005.59 00:44:54.509 lat (usec): min=1381, max=31835, avg=8834.68, stdev=4044.36 00:44:54.509 clat percentiles (usec): 00:44:54.509 | 1.00th=[ 3326], 5.00th=[ 5145], 10.00th=[ 5866], 20.00th=[ 6390], 00:44:54.509 | 30.00th=[ 6718], 40.00th=[ 7177], 50.00th=[ 7570], 60.00th=[ 8094], 00:44:54.509 | 70.00th=[ 9372], 80.00th=[10290], 90.00th=[12911], 95.00th=[15008], 00:44:54.509 | 99.00th=[24773], 99.50th=[27657], 99.90th=[31851], 99.95th=[31851], 00:44:54.509 | 99.99th=[31851] 00:44:54.509 write: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec); 0 zone resets 00:44:54.509 slat (nsec): min=1571, max=46851k, avg=93562.14, stdev=877272.99 00:44:54.509 clat (usec): min=1340, max=56228, avg=12468.55, stdev=11925.41 00:44:54.509 lat (usec): min=1348, max=56232, avg=12562.11, stdev=11994.00 00:44:54.509 clat percentiles (usec): 00:44:54.509 | 1.00th=[ 2573], 5.00th=[ 4113], 10.00th=[ 4359], 20.00th=[ 5276], 00:44:54.509 | 30.00th=[ 6128], 40.00th=[ 6783], 50.00th=[ 7177], 60.00th=[ 8356], 00:44:54.509 | 70.00th=[ 9503], 80.00th=[19792], 90.00th=[30540], 95.00th=[38011], 00:44:54.509 | 99.00th=[54264], 99.50th=[55313], 99.90th=[55837], 99.95th=[56361], 00:44:54.509 | 99.99th=[56361] 00:44:54.509 bw ( KiB/s): min=20480, max=28672, per=23.75%, avg=24576.00, stdev=5792.62, samples=2 00:44:54.509 iops : min= 5120, max= 7168, avg=6144.00, stdev=1448.15, samples=2 00:44:54.509 lat (msec) : 2=0.46%, 4=2.87%, 10=71.58%, 20=12.87%, 50=10.76% 00:44:54.509 lat (msec) : 100=1.45% 00:44:54.509 cpu : usr=4.47%, sys=5.36%, ctx=416, majf=0, minf=1 00:44:54.509 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:44:54.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:44:54.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:54.509 issued rwts: total=5821,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:54.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:54.509 job2: (groupid=0, jobs=1): err= 0: pid=3764719: Thu Nov 28 13:15:24 2024 00:44:54.509 read: IOPS=7076, BW=27.6MiB/s (29.0MB/s)(27.7MiB/1003msec) 00:44:54.509 slat (nsec): min=983, max=9229.3k, avg=71626.88, stdev=558452.38 00:44:54.509 clat (usec): min=1436, max=28789, avg=9484.41, stdev=4022.11 00:44:54.509 lat (usec): min=1445, max=28795, avg=9556.04, stdev=4049.89 00:44:54.509 clat percentiles (usec): 00:44:54.509 | 1.00th=[ 3097], 5.00th=[ 4752], 10.00th=[ 6194], 20.00th=[ 7046], 00:44:54.509 | 30.00th=[ 7504], 40.00th=[ 7898], 50.00th=[ 8455], 60.00th=[ 9110], 00:44:54.509 | 70.00th=[10159], 80.00th=[11600], 90.00th=[13829], 95.00th=[16909], 00:44:54.509 | 99.00th=[26346], 99.50th=[27919], 99.90th=[28705], 99.95th=[28705], 00:44:54.509 | 99.99th=[28705] 00:44:54.509 write: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec); 0 zone resets 00:44:54.509 slat (nsec): min=1616, max=7982.5k, avg=62212.28, stdev=500703.53 00:44:54.509 clat (usec): min=1168, max=26409, avg=8357.26, stdev=3260.06 00:44:54.509 lat (usec): min=1178, max=26411, avg=8419.47, stdev=3283.92 00:44:54.509 clat percentiles (usec): 00:44:54.509 | 1.00th=[ 3425], 5.00th=[ 4686], 10.00th=[ 4883], 20.00th=[ 5604], 00:44:54.509 | 30.00th=[ 6718], 40.00th=[ 7504], 50.00th=[ 7832], 60.00th=[ 8356], 00:44:54.509 | 70.00th=[ 9110], 80.00th=[10290], 90.00th=[11600], 95.00th=[15533], 00:44:54.509 | 99.00th=[21103], 99.50th=[21627], 99.90th=[24249], 99.95th=[24773], 00:44:54.509 | 99.99th=[26346] 00:44:54.509 bw ( KiB/s): min=28672, max=28729, per=27.74%, avg=28700.50, stdev=40.31, samples=2 00:44:54.509 iops : min= 7168, max= 7182, avg=7175.00, stdev= 9.90, samples=2 00:44:54.509 lat (msec) : 2=0.15%, 4=2.22%, 10=71.13%, 20=24.08%, 50=2.42% 00:44:54.509 cpu : 
usr=5.09%, sys=7.39%, ctx=291, majf=0, minf=2 00:44:54.509 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:44:54.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:54.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:54.509 issued rwts: total=7098,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:54.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:54.509 job3: (groupid=0, jobs=1): err= 0: pid=3764726: Thu Nov 28 13:15:24 2024 00:44:54.509 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:44:54.509 slat (nsec): min=933, max=17585k, avg=108960.42, stdev=847451.96 00:44:54.509 clat (usec): min=4360, max=43770, avg=14086.29, stdev=7236.61 00:44:54.509 lat (usec): min=4363, max=43780, avg=14195.25, stdev=7308.27 00:44:54.509 clat percentiles (usec): 00:44:54.509 | 1.00th=[ 5276], 5.00th=[ 6915], 10.00th=[ 7701], 20.00th=[ 8848], 00:44:54.509 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[11338], 60.00th=[13042], 00:44:54.509 | 70.00th=[15139], 80.00th=[20055], 90.00th=[23725], 95.00th=[30540], 00:44:54.509 | 99.00th=[34341], 99.50th=[34866], 99.90th=[40109], 99.95th=[42206], 00:44:54.509 | 99.99th=[43779] 00:44:54.509 write: IOPS=5043, BW=19.7MiB/s (20.7MB/s)(19.8MiB/1007msec); 0 zone resets 00:44:54.509 slat (nsec): min=1563, max=16552k, avg=93401.92, stdev=668307.80 00:44:54.509 clat (usec): min=762, max=40390, avg=12331.01, stdev=5909.48 00:44:54.509 lat (usec): min=771, max=40400, avg=12424.41, stdev=5958.38 00:44:54.509 clat percentiles (usec): 00:44:54.509 | 1.00th=[ 4424], 5.00th=[ 5407], 10.00th=[ 6652], 20.00th=[ 8356], 00:44:54.509 | 30.00th=[ 8848], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10945], 00:44:54.509 | 70.00th=[13435], 80.00th=[17957], 90.00th=[21365], 95.00th=[26608], 00:44:54.509 | 99.00th=[27657], 99.50th=[27657], 99.90th=[36439], 99.95th=[37487], 00:44:54.509 | 99.99th=[40633] 00:44:54.509 bw ( KiB/s): min=18984, max=20624, 
per=19.14%, avg=19804.00, stdev=1159.66, samples=2 00:44:54.509 iops : min= 4746, max= 5156, avg=4951.00, stdev=289.91, samples=2 00:44:54.509 lat (usec) : 1000=0.08% 00:44:54.509 lat (msec) : 2=0.01%, 4=0.04%, 10=46.53%, 20=37.26%, 50=16.08% 00:44:54.509 cpu : usr=3.68%, sys=4.57%, ctx=344, majf=0, minf=1 00:44:54.509 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:44:54.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:54.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:54.509 issued rwts: total=4608,5079,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:54.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:54.509 00:44:54.509 Run status group 0 (all jobs): 00:44:54.509 READ: bw=96.2MiB/s (101MB/s), 17.9MiB/s-28.3MiB/s (18.7MB/s-29.6MB/s), io=97.0MiB (102MB), run=1003-1008msec 00:44:54.509 WRITE: bw=101MiB/s (106MB/s), 19.7MiB/s-29.8MiB/s (20.7MB/s-31.2MB/s), io=102MiB (107MB), run=1003-1008msec 00:44:54.509 00:44:54.509 Disk stats (read/write): 00:44:54.509 nvme0n1: ios=5682/6056, merge=0/0, ticks=49486/51081, in_queue=100567, util=87.68% 00:44:54.509 nvme0n2: ios=5228/5632, merge=0/0, ticks=39334/45950, in_queue=85284, util=97.14% 00:44:54.509 nvme0n3: ios=5766/6144, merge=0/0, ticks=45130/40868, in_queue=85998, util=88.11% 00:44:54.509 nvme0n4: ios=3584/4042, merge=0/0, ticks=27042/25819, in_queue=52861, util=88.37% 00:44:54.510 13:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:44:54.510 13:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3764968 00:44:54.510 13:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:44:54.510 13:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 
00:44:54.510 [global] 00:44:54.510 thread=1 00:44:54.510 invalidate=1 00:44:54.510 rw=read 00:44:54.510 time_based=1 00:44:54.510 runtime=10 00:44:54.510 ioengine=libaio 00:44:54.510 direct=1 00:44:54.510 bs=4096 00:44:54.510 iodepth=1 00:44:54.510 norandommap=1 00:44:54.510 numjobs=1 00:44:54.510 00:44:54.510 [job0] 00:44:54.510 filename=/dev/nvme0n1 00:44:54.510 [job1] 00:44:54.510 filename=/dev/nvme0n2 00:44:54.510 [job2] 00:44:54.510 filename=/dev/nvme0n3 00:44:54.510 [job3] 00:44:54.510 filename=/dev/nvme0n4 00:44:54.510 Could not set queue depth (nvme0n1) 00:44:54.510 Could not set queue depth (nvme0n2) 00:44:54.510 Could not set queue depth (nvme0n3) 00:44:54.510 Could not set queue depth (nvme0n4) 00:44:54.770 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:54.770 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:54.770 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:54.770 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:54.770 fio-3.35 00:44:54.770 Starting 4 threads 00:44:57.458 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:44:57.458 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=9781248, buflen=4096 00:44:57.458 fio: pid=3765196, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:44:57.458 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:44:57.719 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=10690560, buflen=4096 00:44:57.719 fio: pid=3765190, err=95/file:io_u.c:1889, 
func=io_u error, error=Operation not supported 00:44:57.719 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:57.719 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:44:57.979 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:57.979 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:44:57.979 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=290816, buflen=4096 00:44:57.979 fio: pid=3765173, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:44:58.240 13:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:58.240 13:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:44:58.240 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=2834432, buflen=4096 00:44:58.240 fio: pid=3765180, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:44:58.241 00:44:58.241 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3765173: Thu Nov 28 13:15:28 2024 00:44:58.241 read: IOPS=24, BW=95.4KiB/s (97.7kB/s)(284KiB/2976msec) 00:44:58.241 slat (usec): min=21, max=19670, avg=459.92, stdev=2672.87 00:44:58.241 clat (usec): min=708, max=42687, avg=41133.91, stdev=4887.61 00:44:58.241 lat (usec): 
min=772, max=61955, avg=41436.37, stdev=5470.45 00:44:58.241 clat percentiles (usec): 00:44:58.241 | 1.00th=[ 709], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:44:58.241 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:44:58.241 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:44:58.241 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:44:58.241 | 99.99th=[42730] 00:44:58.241 bw ( KiB/s): min= 96, max= 96, per=1.32%, avg=96.00, stdev= 0.00, samples=5 00:44:58.241 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:44:58.241 lat (usec) : 750=1.39% 00:44:58.241 lat (msec) : 50=97.22% 00:44:58.241 cpu : usr=0.10%, sys=0.00%, ctx=75, majf=0, minf=1 00:44:58.241 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:58.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:58.241 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:58.241 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:58.241 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:58.241 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3765180: Thu Nov 28 13:15:28 2024 00:44:58.241 read: IOPS=219, BW=876KiB/s (897kB/s)(2768KiB/3161msec) 00:44:58.241 slat (usec): min=6, max=19755, avg=94.55, stdev=980.08 00:44:58.241 clat (usec): min=580, max=42132, avg=4434.39, stdev=11130.52 00:44:58.241 lat (usec): min=606, max=54031, avg=4529.04, stdev=11263.28 00:44:58.241 clat percentiles (usec): 00:44:58.241 | 1.00th=[ 725], 5.00th=[ 881], 10.00th=[ 963], 20.00th=[ 1045], 00:44:58.241 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1139], 60.00th=[ 1172], 00:44:58.241 | 70.00th=[ 1237], 80.00th=[ 1287], 90.00th=[ 1434], 95.00th=[41681], 00:44:58.241 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:58.241 | 99.99th=[42206] 00:44:58.241 bw ( 
KiB/s): min= 104, max= 2944, per=12.03%, avg=877.67, stdev=1146.81, samples=6 00:44:58.241 iops : min= 26, max= 736, avg=219.33, stdev=286.65, samples=6 00:44:58.241 lat (usec) : 750=1.59%, 1000=11.98% 00:44:58.241 lat (msec) : 2=78.07%, 10=0.14%, 50=8.08% 00:44:58.241 cpu : usr=0.35%, sys=0.54%, ctx=697, majf=0, minf=2 00:44:58.241 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:58.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:58.241 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:58.241 issued rwts: total=693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:58.241 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:58.241 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3765190: Thu Nov 28 13:15:28 2024 00:44:58.241 read: IOPS=938, BW=3753KiB/s (3843kB/s)(10.2MiB/2782msec) 00:44:58.241 slat (usec): min=7, max=17830, avg=36.58, stdev=408.35 00:44:58.241 clat (usec): min=611, max=1350, avg=1012.88, stdev=106.40 00:44:58.241 lat (usec): min=636, max=18927, avg=1049.46, stdev=424.51 00:44:58.241 clat percentiles (usec): 00:44:58.241 | 1.00th=[ 742], 5.00th=[ 807], 10.00th=[ 865], 20.00th=[ 930], 00:44:58.241 | 30.00th=[ 971], 40.00th=[ 1004], 50.00th=[ 1029], 60.00th=[ 1057], 00:44:58.241 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1156], 00:44:58.241 | 99.00th=[ 1221], 99.50th=[ 1270], 99.90th=[ 1303], 99.95th=[ 1319], 00:44:58.241 | 99.99th=[ 1352] 00:44:58.241 bw ( KiB/s): min= 3696, max= 4056, per=52.72%, avg=3843.20, stdev=190.71, samples=5 00:44:58.241 iops : min= 924, max= 1014, avg=960.80, stdev=47.68, samples=5 00:44:58.241 lat (usec) : 750=1.49%, 1000=38.30% 00:44:58.241 lat (msec) : 2=60.17% 00:44:58.241 cpu : usr=0.68%, sys=3.16%, ctx=2613, majf=0, minf=2 00:44:58.241 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:58.241 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:58.241 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:58.241 issued rwts: total=2611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:58.241 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:58.241 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3765196: Thu Nov 28 13:15:28 2024 00:44:58.241 read: IOPS=920, BW=3682KiB/s (3771kB/s)(9552KiB/2594msec) 00:44:58.241 slat (nsec): min=6709, max=63514, avg=26074.81, stdev=4129.34 00:44:58.241 clat (usec): min=339, max=41690, avg=1042.45, stdev=844.00 00:44:58.241 lat (usec): min=366, max=41698, avg=1068.52, stdev=843.71 00:44:58.241 clat percentiles (usec): 00:44:58.241 | 1.00th=[ 594], 5.00th=[ 766], 10.00th=[ 840], 20.00th=[ 922], 00:44:58.241 | 30.00th=[ 979], 40.00th=[ 1020], 50.00th=[ 1057], 60.00th=[ 1074], 00:44:58.241 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1172], 95.00th=[ 1205], 00:44:58.241 | 99.00th=[ 1352], 99.50th=[ 1434], 99.90th=[ 1598], 99.95th=[ 1614], 00:44:58.241 | 99.99th=[41681] 00:44:58.241 bw ( KiB/s): min= 3648, max= 3960, per=51.18%, avg=3731.20, stdev=129.94, samples=5 00:44:58.241 iops : min= 912, max= 990, avg=932.80, stdev=32.48, samples=5 00:44:58.241 lat (usec) : 500=0.46%, 750=3.68%, 1000=30.14% 00:44:58.241 lat (msec) : 2=65.63%, 50=0.04% 00:44:58.241 cpu : usr=0.96%, sys=2.85%, ctx=2389, majf=0, minf=2 00:44:58.241 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:58.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:58.241 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:58.241 issued rwts: total=2389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:58.241 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:58.241 00:44:58.241 Run status group 0 (all jobs): 00:44:58.241 READ: bw=7290KiB/s (7465kB/s), 95.4KiB/s-3753KiB/s 
(97.7kB/s-3843kB/s), io=22.5MiB (23.6MB), run=2594-3161msec 00:44:58.241 00:44:58.241 Disk stats (read/write): 00:44:58.241 nvme0n1: ios=68/0, merge=0/0, ticks=2796/0, in_queue=2796, util=94.12% 00:44:58.241 nvme0n2: ios=690/0, merge=0/0, ticks=2980/0, in_queue=2980, util=94.27% 00:44:58.241 nvme0n3: ios=2479/0, merge=0/0, ticks=2461/0, in_queue=2461, util=96.03% 00:44:58.241 nvme0n4: ios=2389/0, merge=0/0, ticks=2433/0, in_queue=2433, util=95.98% 00:44:58.241 13:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:58.241 13:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:44:58.502 13:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:58.502 13:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:44:58.763 13:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:58.763 13:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:44:58.763 13:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:58.763 13:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:44:59.025 13:15:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:44:59.025 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3764968 00:44:59.025 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:44:59.025 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:44:59.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:44:59.025 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:44:59.025 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:44:59.025 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:44:59.025 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:59.025 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:59.025 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:44:59.025 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:44:59.025 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:44:59.025 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:44:59.025 nvmf hotplug test: fio failed as expected 00:44:59.025 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:59.286 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:44:59.286 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:44:59.286 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:44:59.286 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:44:59.286 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:44:59.286 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:59.286 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:44:59.286 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:59.286 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:44:59.286 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:59.286 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:59.286 rmmod nvme_tcp 00:44:59.286 rmmod nvme_fabrics 00:44:59.286 rmmod nvme_keyring 00:44:59.286 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:59.286 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:44:59.286 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:44:59.286 13:15:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3761797 ']' 00:44:59.286 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3761797 00:44:59.286 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3761797 ']' 00:44:59.286 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3761797 00:44:59.286 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:44:59.286 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:59.286 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3761797 00:44:59.546 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:59.546 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:59.547 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3761797' 00:44:59.547 killing process with pid 3761797 00:44:59.547 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3761797 00:44:59.547 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3761797 00:44:59.547 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:59.547 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:59.547 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:59.547 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:44:59.547 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:44:59.547 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:59.547 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:44:59.547 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:59.547 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:59.547 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:59.547 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:59.547 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:02.109 00:45:02.109 real 0m27.996s 00:45:02.109 user 2m19.549s 00:45:02.109 sys 0m12.114s 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:45:02.109 ************************************ 00:45:02.109 END TEST nvmf_fio_target 00:45:02.109 ************************************ 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:45:02.109 ************************************ 00:45:02.109 START TEST nvmf_bdevio 00:45:02.109 ************************************ 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:45:02.109 * Looking for test storage... 00:45:02.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:45:02.109 13:15:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:02.109 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:45:02.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:02.110 --rc genhtml_branch_coverage=1 
00:45:02.110 --rc genhtml_function_coverage=1 00:45:02.110 --rc genhtml_legend=1 00:45:02.110 --rc geninfo_all_blocks=1 00:45:02.110 --rc geninfo_unexecuted_blocks=1 00:45:02.110 00:45:02.110 ' 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:45:02.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:02.110 --rc genhtml_branch_coverage=1 00:45:02.110 --rc genhtml_function_coverage=1 00:45:02.110 --rc genhtml_legend=1 00:45:02.110 --rc geninfo_all_blocks=1 00:45:02.110 --rc geninfo_unexecuted_blocks=1 00:45:02.110 00:45:02.110 ' 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:45:02.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:02.110 --rc genhtml_branch_coverage=1 00:45:02.110 --rc genhtml_function_coverage=1 00:45:02.110 --rc genhtml_legend=1 00:45:02.110 --rc geninfo_all_blocks=1 00:45:02.110 --rc geninfo_unexecuted_blocks=1 00:45:02.110 00:45:02.110 ' 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:45:02.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:02.110 --rc genhtml_branch_coverage=1 00:45:02.110 --rc genhtml_function_coverage=1 00:45:02.110 --rc genhtml_legend=1 00:45:02.110 --rc geninfo_all_blocks=1 00:45:02.110 --rc geninfo_unexecuted_blocks=1 00:45:02.110 00:45:02.110 ' 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:02.110 13:15:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:45:02.110 13:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:10.250 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:10.251 13:15:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:45:10.251 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:45:10.251 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:10.251 13:15:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:45:10.251 Found net devices under 0000:4b:00.0: cvl_0_0 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:45:10.251 Found net devices under 0000:4b:00.1: cvl_0_1 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:10.251 13:15:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:10.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:10.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:45:10.251 00:45:10.251 --- 10.0.0.2 ping statistics --- 00:45:10.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:10.251 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:10.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:10.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:45:10.251 00:45:10.251 --- 10.0.0.1 ping statistics --- 00:45:10.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:10.251 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=3770183 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3770183 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3770183 ']' 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:10.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:10.251 13:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:10.251 [2024-11-28 13:15:39.570566] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:45:10.251 [2024-11-28 13:15:39.571709] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:45:10.251 [2024-11-28 13:15:39.571762] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:10.251 [2024-11-28 13:15:39.717139] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:45:10.251 [2024-11-28 13:15:39.777693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:10.251 [2024-11-28 13:15:39.805505] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:10.252 [2024-11-28 13:15:39.805549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:10.252 [2024-11-28 13:15:39.805558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:10.252 [2024-11-28 13:15:39.805565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:10.252 [2024-11-28 13:15:39.805571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:10.252 [2024-11-28 13:15:39.807422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:45:10.252 [2024-11-28 13:15:39.807581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:45:10.252 [2024-11-28 13:15:39.807737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:45:10.252 [2024-11-28 13:15:39.807738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:45:10.252 [2024-11-28 13:15:39.869732] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:45:10.252 [2024-11-28 13:15:39.871209] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:45:10.252 [2024-11-28 13:15:39.871455] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:45:10.252 [2024-11-28 13:15:39.872111] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:45:10.252 [2024-11-28 13:15:39.872147] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:45:10.512 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:10.513 [2024-11-28 13:15:40.432629] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:10.513 13:15:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:10.513 Malloc0 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:10.513 13:15:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:10.513 [2024-11-28 13:15:40.524811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:10.513 { 00:45:10.513 "params": { 00:45:10.513 "name": "Nvme$subsystem", 00:45:10.513 "trtype": "$TEST_TRANSPORT", 00:45:10.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:10.513 "adrfam": "ipv4", 00:45:10.513 "trsvcid": "$NVMF_PORT", 00:45:10.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:10.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:10.513 "hdgst": ${hdgst:-false}, 00:45:10.513 "ddgst": ${ddgst:-false} 00:45:10.513 }, 00:45:10.513 "method": "bdev_nvme_attach_controller" 00:45:10.513 } 00:45:10.513 EOF 00:45:10.513 )") 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:45:10.513 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:10.513 "params": { 00:45:10.513 "name": "Nvme1", 00:45:10.513 "trtype": "tcp", 00:45:10.513 "traddr": "10.0.0.2", 00:45:10.513 "adrfam": "ipv4", 00:45:10.513 "trsvcid": "4420", 00:45:10.513 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:10.513 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:10.513 "hdgst": false, 00:45:10.513 "ddgst": false 00:45:10.513 }, 00:45:10.513 "method": "bdev_nvme_attach_controller" 00:45:10.513 }' 00:45:10.513 [2024-11-28 13:15:40.582054] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:45:10.513 [2024-11-28 13:15:40.582107] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3770530 ] 00:45:10.774 [2024-11-28 13:15:40.716092] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:45:10.774 [2024-11-28 13:15:40.773707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:45:10.774 [2024-11-28 13:15:40.795653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:10.774 [2024-11-28 13:15:40.795806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:10.774 [2024-11-28 13:15:40.795807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:11.035 I/O targets: 00:45:11.035 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:45:11.035 00:45:11.035 00:45:11.035 CUnit - A unit testing framework for C - Version 2.1-3 00:45:11.035 http://cunit.sourceforge.net/ 00:45:11.035 00:45:11.035 00:45:11.035 Suite: bdevio tests on: Nvme1n1 00:45:11.035 Test: blockdev write read block ...passed 00:45:11.295 Test: blockdev write zeroes read block ...passed 00:45:11.295 Test: blockdev write zeroes read no split ...passed 00:45:11.295 Test: blockdev write zeroes read split ...passed 00:45:11.295 Test: blockdev write zeroes read split partial ...passed 00:45:11.296 Test: blockdev reset ...[2024-11-28 13:15:41.220702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:45:11.296 [2024-11-28 13:15:41.220803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2419c10 (9): Bad file descriptor 00:45:11.296 [2024-11-28 13:15:41.228260] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:45:11.296 passed 00:45:11.296 Test: blockdev write read 8 blocks ...passed 00:45:11.296 Test: blockdev write read size > 128k ...passed 00:45:11.296 Test: blockdev write read invalid size ...passed 00:45:11.296 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:45:11.296 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:45:11.296 Test: blockdev write read max offset ...passed 00:45:11.296 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:45:11.296 Test: blockdev writev readv 8 blocks ...passed 00:45:11.296 Test: blockdev writev readv 30 x 1block ...passed 00:45:11.296 Test: blockdev writev readv block ...passed 00:45:11.296 Test: blockdev writev readv size > 128k ...passed 00:45:11.296 Test: blockdev writev readv size > 128k in two iovs ...passed 00:45:11.296 Test: blockdev comparev and writev ...[2024-11-28 13:15:41.412459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:11.296 [2024-11-28 13:15:41.412516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:45:11.296 [2024-11-28 13:15:41.412534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:11.296 [2024-11-28 13:15:41.412543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:45:11.296 [2024-11-28 13:15:41.413172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:11.296 [2024-11-28 13:15:41.413188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:45:11.296 [2024-11-28 13:15:41.413202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:11.296 [2024-11-28 13:15:41.413211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:45:11.296 [2024-11-28 13:15:41.413805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:11.296 [2024-11-28 13:15:41.413825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:45:11.296 [2024-11-28 13:15:41.413839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:11.296 [2024-11-28 13:15:41.413847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:45:11.296 [2024-11-28 13:15:41.414441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:11.296 [2024-11-28 13:15:41.414455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:45:11.296 [2024-11-28 13:15:41.414469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:11.296 [2024-11-28 13:15:41.414477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:45:11.557 passed 00:45:11.557 Test: blockdev nvme passthru rw ...passed 00:45:11.557 Test: blockdev nvme passthru vendor specific ...[2024-11-28 13:15:41.500002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:45:11.557 [2024-11-28 13:15:41.500019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:45:11.557 [2024-11-28 13:15:41.500401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:45:11.557 [2024-11-28 13:15:41.500414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:45:11.557 [2024-11-28 13:15:41.500764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:45:11.557 [2024-11-28 13:15:41.500776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:45:11.557 [2024-11-28 13:15:41.501167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:45:11.557 [2024-11-28 13:15:41.501180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:45:11.557 passed 00:45:11.557 Test: blockdev nvme admin passthru ...passed 00:45:11.557 Test: blockdev copy ...passed 00:45:11.557 00:45:11.557 Run Summary: Type Total Ran Passed Failed Inactive 00:45:11.557 suites 1 1 n/a 0 0 00:45:11.557 tests 23 23 23 0 0 00:45:11.557 asserts 152 152 152 0 n/a 00:45:11.557 00:45:11.557 Elapsed time = 0.949 seconds 00:45:11.818 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:11.818 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:11.818 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:11.818 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:11.818 13:15:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:45:11.818 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:45:11.818 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:11.819 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:45:11.819 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:11.819 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:45:11.819 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:11.819 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:11.819 rmmod nvme_tcp 00:45:11.819 rmmod nvme_fabrics 00:45:11.819 rmmod nvme_keyring 00:45:11.819 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:11.819 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:45:11.819 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:45:11.819 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3770183 ']' 00:45:11.819 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3770183 00:45:11.819 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3770183 ']' 00:45:11.819 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3770183 00:45:11.819 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:45:11.819 
13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:11.819 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3770183 00:45:11.819 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:45:11.819 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:45:11.819 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3770183' 00:45:11.819 killing process with pid 3770183 00:45:11.819 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3770183 00:45:11.819 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3770183 00:45:12.080 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:45:12.080 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:12.080 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:12.080 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:45:12.080 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:45:12.080 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:12.080 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:45:12.080 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:12.080 13:15:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:12.080 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:12.080 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:12.080 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:13.992 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:13.992 00:45:13.992 real 0m12.327s 00:45:13.992 user 0m9.830s 00:45:13.992 sys 0m6.476s 00:45:13.992 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:13.992 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:13.992 ************************************ 00:45:13.992 END TEST nvmf_bdevio 00:45:13.992 ************************************ 00:45:13.992 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:45:13.992 00:45:13.992 real 4m58.673s 00:45:13.992 user 10m16.026s 00:45:13.992 sys 2m3.673s 00:45:13.992 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:13.992 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:45:13.992 ************************************ 00:45:13.992 END TEST nvmf_target_core_interrupt_mode 00:45:13.992 ************************************ 00:45:14.253 13:15:44 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:45:14.253 13:15:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 
00:45:14.253 13:15:44 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:14.253 13:15:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:14.253 ************************************ 00:45:14.253 START TEST nvmf_interrupt 00:45:14.253 ************************************ 00:45:14.253 13:15:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:45:14.253 * Looking for test storage... 00:45:14.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:14.253 13:15:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:45:14.253 13:15:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:45:14.253 13:15:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:45:14.253 13:15:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:45:14.253 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:14.253 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:14.253 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:14.253 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:45:14.253 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:45:14.253 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:45:14.253 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:45:14.253 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:45:14.253 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:45:14.253 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:45:14.253 13:15:44 nvmf_tcp.nvmf_interrupt -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:14.253 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:45:14.253 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:45:14.253 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:45:14.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:14.514 --rc genhtml_branch_coverage=1 00:45:14.514 --rc 
genhtml_function_coverage=1 00:45:14.514 --rc genhtml_legend=1 00:45:14.514 --rc geninfo_all_blocks=1 00:45:14.514 --rc geninfo_unexecuted_blocks=1 00:45:14.514 00:45:14.514 ' 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:45:14.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:14.514 --rc genhtml_branch_coverage=1 00:45:14.514 --rc genhtml_function_coverage=1 00:45:14.514 --rc genhtml_legend=1 00:45:14.514 --rc geninfo_all_blocks=1 00:45:14.514 --rc geninfo_unexecuted_blocks=1 00:45:14.514 00:45:14.514 ' 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:45:14.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:14.514 --rc genhtml_branch_coverage=1 00:45:14.514 --rc genhtml_function_coverage=1 00:45:14.514 --rc genhtml_legend=1 00:45:14.514 --rc geninfo_all_blocks=1 00:45:14.514 --rc geninfo_unexecuted_blocks=1 00:45:14.514 00:45:14.514 ' 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:45:14.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:14.514 --rc genhtml_branch_coverage=1 00:45:14.514 --rc genhtml_function_coverage=1 00:45:14.514 --rc genhtml_legend=1 00:45:14.514 --rc geninfo_all_blocks=1 00:45:14.514 --rc geninfo_unexecuted_blocks=1 00:45:14.514 00:45:14.514 ' 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:45:14.514 13:15:44 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:45:14.515 13:15:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:22.654 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:22.654 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:45:22.654 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:22.654 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:22.654 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:22.654 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:22.654 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:22.654 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:45:22.654 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:22.654 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:45:22.654 13:15:51 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 
00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:45:22.655 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:45:22.655 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt 
-- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:45:22.655 Found net devices under 0000:4b:00.0: cvl_0_0 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:45:22.655 Found net 
devices under 0000:4b:00.1: cvl_0_1 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # 
ip netns add cvl_0_0_ns_spdk 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:22.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:22.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:45:22.655 00:45:22.655 --- 10.0.0.2 ping statistics --- 00:45:22.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:22.655 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:22.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:22.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:45:22.655 00:45:22.655 --- 10.0.0.1 ping statistics --- 00:45:22.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:22.655 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3774882 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3774882 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- 
common/autotest_common.sh@835 -- # '[' -z 3774882 ']' 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:22.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:22.655 13:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:22.655 [2024-11-28 13:15:51.760296] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:45:22.656 [2024-11-28 13:15:51.761432] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:45:22.656 [2024-11-28 13:15:51.761482] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:22.656 [2024-11-28 13:15:51.905735] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:45:22.656 [2024-11-28 13:15:51.965569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:45:22.656 [2024-11-28 13:15:51.992331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:22.656 [2024-11-28 13:15:51.992377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:45:22.656 [2024-11-28 13:15:51.992385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:22.656 [2024-11-28 13:15:51.992393] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:22.656 [2024-11-28 13:15:51.992399] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:22.656 [2024-11-28 13:15:51.993973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:22.656 [2024-11-28 13:15:51.993975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:22.656 [2024-11-28 13:15:52.058993] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:45:22.656 [2024-11-28 13:15:52.059591] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:45:22.656 [2024-11-28 13:15:52.059904] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:45:22.656 5000+0 records in 00:45:22.656 5000+0 records out 00:45:22.656 10240000 bytes (10 MB, 9.8 MiB) copied, 0.018884 s, 542 MB/s 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:22.656 AIO0 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.656 13:15:52 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:22.656 [2024-11-28 13:15:52.691007] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:22.656 [2024-11-28 13:15:52.735384] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3774882 0 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3774882 0 idle 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3774882 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3774882 -w 256 00:45:22.656 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3774882 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.28 reactor_0' 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3774882 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.28 reactor_0 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:22.917 
13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3774882 1 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3774882 1 idle 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3774882 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3774882 -w 256 00:45:22.917 13:15:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3774888 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3774888 root 20 0 128.2g 
44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3775138 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3774882 0 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3774882 0 busy 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3774882 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3774882 -w 256 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3774882 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.28 reactor_0' 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3774882 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.28 reactor_0 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:23.178 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:23.438 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:23.438 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:23.438 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:45:23.438 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:45:23.438 13:15:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@26 -- # top -bHn 1 -p 3774882 -w 256 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3774882 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.63 reactor_0' 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3774882 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.63 reactor_0 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3774882 1 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3774882 1 busy 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3774882 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local 
busy_threshold=30 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:24.380 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:45:24.641 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:24.641 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:24.641 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:24.641 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3774882 -w 256 00:45:24.641 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:45:24.641 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3774888 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.36 reactor_1' 00:45:24.641 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3774888 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.36 reactor_1 00:45:24.641 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:24.641 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:24.641 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:45:24.641 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:45:24.641 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:45:24.641 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:45:24.641 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:45:24.641 13:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:24.641 13:15:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3775138 00:45:34.636 Initializing NVMe Controllers 00:45:34.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:45:34.636 
Controller IO queue size 256, less than required. 00:45:34.636 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:45:34.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:45:34.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:45:34.636 Initialization complete. Launching workers. 00:45:34.636 ======================================================== 00:45:34.636 Latency(us) 00:45:34.636 Device Information : IOPS MiB/s Average min max 00:45:34.636 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19166.80 74.87 13361.41 3229.52 32505.01 00:45:34.636 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19860.90 77.58 12891.73 7277.00 28821.44 00:45:34.636 ======================================================== 00:45:34.636 Total : 39027.70 152.45 13122.39 3229.52 32505.01 00:45:34.636 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3774882 0 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3774882 0 idle 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3774882 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:34.636 13:16:03 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3774882 -w 256 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3774882 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.23 reactor_0' 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3774882 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.23 reactor_0 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3774882 1 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3774882 1 idle 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3774882 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3774882 -w 256 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3774888 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:09.98 reactor_1' 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3774888 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:09.98 reactor_1 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:45:34.636 13:16:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:45:34.636 13:16:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:45:34.636 13:16:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:45:34.636 13:16:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:45:34.636 13:16:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:45:34.636 13:16:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:45:36.545 13:16:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:45:36.545 13:16:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:45:36.545 13:16:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:45:36.545 13:16:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:45:36.545 13:16:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3774882 0 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3774882 0 idle 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3774882 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:45:36.546 13:16:06 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3774882 -w 256 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3774882 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.57 reactor_0' 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3774882 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.57 reactor_0 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 
0 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3774882 1 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3774882 1 idle 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3774882 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3774882 -w 256 00:45:36.546 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:45:36.806 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3774888 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.10 reactor_1' 00:45:36.806 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3774888 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.10 reactor_1 00:45:36.806 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:36.806 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:36.806 13:16:06 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:36.806 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:36.806 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:36.806 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:36.806 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:36.806 13:16:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:36.806 13:16:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:45:36.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:45:36.806 13:16:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:45:36.806 13:16:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:45:36.807 13:16:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:45:36.807 13:16:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:45:36.807 13:16:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:45:36.807 13:16:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:45:37.067 13:16:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:45:37.067 13:16:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:45:37.067 13:16:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:45:37.067 13:16:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:37.067 13:16:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:45:37.067 13:16:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:37.067 13:16:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- 
# set +e 00:45:37.067 13:16:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:37.067 13:16:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:37.067 rmmod nvme_tcp 00:45:37.067 rmmod nvme_fabrics 00:45:37.067 rmmod nvme_keyring 00:45:37.067 13:16:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:37.067 13:16:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:45:37.067 13:16:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:45:37.067 13:16:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 3774882 ']' 00:45:37.067 13:16:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3774882 00:45:37.067 13:16:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3774882 ']' 00:45:37.067 13:16:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3774882 00:45:37.067 13:16:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:45:37.067 13:16:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:37.067 13:16:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3774882 00:45:37.067 13:16:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:37.067 13:16:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:37.067 13:16:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3774882' 00:45:37.067 killing process with pid 3774882 00:45:37.067 13:16:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3774882 00:45:37.067 13:16:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3774882 00:45:37.328 13:16:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:45:37.328 13:16:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ 
tcp == \t\c\p ]] 00:45:37.328 13:16:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:37.328 13:16:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:45:37.328 13:16:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:45:37.328 13:16:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:37.328 13:16:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:45:37.328 13:16:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:37.328 13:16:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:37.328 13:16:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:37.328 13:16:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:37.328 13:16:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:39.240 13:16:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:39.240 00:45:39.240 real 0m25.121s 00:45:39.240 user 0m40.220s 00:45:39.240 sys 0m9.414s 00:45:39.240 13:16:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:39.240 13:16:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:39.240 ************************************ 00:45:39.240 END TEST nvmf_interrupt 00:45:39.240 ************************************ 00:45:39.240 00:45:39.240 real 38m30.785s 00:45:39.240 user 92m13.456s 00:45:39.240 sys 11m28.217s 00:45:39.240 13:16:09 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:39.240 13:16:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:39.240 ************************************ 00:45:39.240 END TEST nvmf_tcp 00:45:39.240 ************************************ 00:45:39.500 13:16:09 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:45:39.500 13:16:09 -- 
spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:45:39.500 13:16:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:45:39.500 13:16:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:39.500 13:16:09 -- common/autotest_common.sh@10 -- # set +x 00:45:39.500 ************************************ 00:45:39.500 START TEST spdkcli_nvmf_tcp 00:45:39.500 ************************************ 00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:45:39.500 * Looking for test storage... 00:45:39.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 
00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:39.500 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:45:39.501 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:45:39.501 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:45:39.501 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:45:39.501 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:39.501 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:45:39.501 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:45:39.501 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:39.501 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:39.501 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:45:39.501 13:16:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:39.501 13:16:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:45:39.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:39.501 --rc genhtml_branch_coverage=1 00:45:39.501 --rc genhtml_function_coverage=1 00:45:39.501 --rc genhtml_legend=1 00:45:39.501 --rc geninfo_all_blocks=1 
00:45:39.501 --rc geninfo_unexecuted_blocks=1 00:45:39.501 00:45:39.501 ' 00:45:39.501 13:16:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:45:39.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:39.501 --rc genhtml_branch_coverage=1 00:45:39.501 --rc genhtml_function_coverage=1 00:45:39.501 --rc genhtml_legend=1 00:45:39.501 --rc geninfo_all_blocks=1 00:45:39.501 --rc geninfo_unexecuted_blocks=1 00:45:39.501 00:45:39.501 ' 00:45:39.501 13:16:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:45:39.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:39.501 --rc genhtml_branch_coverage=1 00:45:39.501 --rc genhtml_function_coverage=1 00:45:39.501 --rc genhtml_legend=1 00:45:39.501 --rc geninfo_all_blocks=1 00:45:39.501 --rc geninfo_unexecuted_blocks=1 00:45:39.501 00:45:39.501 ' 00:45:39.501 13:16:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:45:39.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:39.501 --rc genhtml_branch_coverage=1 00:45:39.501 --rc genhtml_function_coverage=1 00:45:39.501 --rc genhtml_legend=1 00:45:39.501 --rc geninfo_all_blocks=1 00:45:39.501 --rc geninfo_unexecuted_blocks=1 00:45:39.501 00:45:39.501 ' 00:45:39.501 13:16:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:45:39.501 13:16:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:45:39.501 13:16:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:45:39.501 13:16:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:39.501 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:45:39.762 13:16:09 
spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:39.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3778401 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3778401 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3778401 ']' 00:45:39.762 13:16:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:39.762 13:16:09 
spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:45:39.763 13:16:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:39.763 13:16:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:39.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:39.763 13:16:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:39.763 13:16:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:39.763 [2024-11-28 13:16:09.716234] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:45:39.763 [2024-11-28 13:16:09.716290] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3778401 ] 00:45:39.763 [2024-11-28 13:16:09.849344] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:45:40.023 [2024-11-28 13:16:09.907436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:45:40.023 [2024-11-28 13:16:09.938048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:40.023 [2024-11-28 13:16:09.938054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:40.624 13:16:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:40.624 13:16:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:45:40.624 13:16:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:45:40.624 13:16:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:40.624 13:16:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:40.624 13:16:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:45:40.624 13:16:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:45:40.624 13:16:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:45:40.624 13:16:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:40.624 13:16:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:40.624 13:16:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:45:40.624 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:45:40.624 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:45:40.624 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:45:40.625 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:45:40.625 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:45:40.625 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:45:40.625 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 
N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:45:40.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:45:40.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:45:40.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:45:40.625 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:40.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:45:40.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:45:40.625 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:40.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:45:40.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:45:40.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:45:40.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:45:40.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:40.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:45:40.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:45:40.625 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:45:40.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:45:40.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:40.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:45:40.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:45:40.625 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:45:40.625 ' 00:45:43.174 [2024-11-28 13:16:13.267228] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:44.560 [2024-11-28 13:16:14.624329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:45:47.104 [2024-11-28 13:16:17.149496] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:45:49.653 [2024-11-28 13:16:19.378604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:45:51.036 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:45:51.036 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:45:51.036 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:45:51.036 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:45:51.036 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:45:51.036 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:45:51.036 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:45:51.036 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:45:51.036 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:45:51.036 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:45:51.036 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:45:51.036 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:45:51.036 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:45:51.036 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:45:51.036 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:45:51.036 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:45:51.036 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:45:51.036 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:45:51.036 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:45:51.036 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:45:51.036 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:45:51.036 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:45:51.036 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:45:51.036 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:45:51.036 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:45:51.036 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:45:51.036 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:45:51.036 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:45:51.296 13:16:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:45:51.296 13:16:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:51.296 13:16:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:51.296 13:16:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:45:51.296 13:16:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:51.296 13:16:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:51.296 13:16:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:45:51.296 13:16:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:45:51.557 13:16:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:45:51.557 13:16:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:45:51.557 13:16:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:45:51.557 13:16:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:51.557 13:16:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:51.818 13:16:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:45:51.818 13:16:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:51.818 13:16:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:51.818 13:16:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:45:51.818 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:45:51.818 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:45:51.818 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:45:51.818 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:45:51.818 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:45:51.819 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:45:51.819 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:45:51.819 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:45:51.819 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:45:51.819 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:45:51.819 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:45:51.819 '\''/bdevs/malloc delete Malloc2'\'' 
'\''Malloc2'\'' 00:45:51.819 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:45:51.819 ' 00:45:58.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:45:58.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:45:58.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:45:58.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:45:58.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:45:58.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:45:58.403 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:45:58.403 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:45:58.403 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:45:58.403 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:45:58.403 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:45:58.403 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:45:58.403 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:45:58.403 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 
3778401 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3778401 ']' 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3778401 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3778401 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3778401' 00:45:58.403 killing process with pid 3778401 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3778401 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3778401 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3778401 ']' 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3778401 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3778401 ']' 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3778401 00:45:58.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3778401) - No such process 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3778401 is not found' 00:45:58.403 Process with pid 3778401 is not found 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@19 -- # '[' -n '' ']' 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:45:58.403 00:45:58.403 real 0m18.160s 00:45:58.403 user 0m40.162s 00:45:58.403 sys 0m0.920s 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:58.403 13:16:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:58.403 ************************************ 00:45:58.403 END TEST spdkcli_nvmf_tcp 00:45:58.403 ************************************ 00:45:58.403 13:16:27 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:45:58.403 13:16:27 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:45:58.403 13:16:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:58.403 13:16:27 -- common/autotest_common.sh@10 -- # set +x 00:45:58.403 ************************************ 00:45:58.403 START TEST nvmf_identify_passthru 00:45:58.403 ************************************ 00:45:58.403 13:16:27 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:45:58.403 * Looking for test storage... 
00:45:58.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:58.403 13:16:27 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:45:58.403 13:16:27 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:45:58.403 13:16:27 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:45:58.403 13:16:27 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:58.403 13:16:27 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:45:58.403 13:16:27 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:58.403 13:16:27 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:45:58.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:58.403 --rc genhtml_branch_coverage=1 00:45:58.403 --rc genhtml_function_coverage=1 00:45:58.403 --rc genhtml_legend=1 00:45:58.403 --rc geninfo_all_blocks=1 00:45:58.403 --rc geninfo_unexecuted_blocks=1 00:45:58.403 00:45:58.403 ' 00:45:58.403 13:16:27 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:45:58.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:58.403 --rc genhtml_branch_coverage=1 00:45:58.403 --rc genhtml_function_coverage=1 
00:45:58.403 --rc genhtml_legend=1 00:45:58.403 --rc geninfo_all_blocks=1 00:45:58.403 --rc geninfo_unexecuted_blocks=1 00:45:58.403 00:45:58.403 ' 00:45:58.403 13:16:27 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:45:58.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:58.403 --rc genhtml_branch_coverage=1 00:45:58.403 --rc genhtml_function_coverage=1 00:45:58.403 --rc genhtml_legend=1 00:45:58.403 --rc geninfo_all_blocks=1 00:45:58.403 --rc geninfo_unexecuted_blocks=1 00:45:58.403 00:45:58.403 ' 00:45:58.403 13:16:27 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:45:58.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:58.403 --rc genhtml_branch_coverage=1 00:45:58.403 --rc genhtml_function_coverage=1 00:45:58.403 --rc genhtml_legend=1 00:45:58.403 --rc geninfo_all_blocks=1 00:45:58.403 --rc geninfo_unexecuted_blocks=1 00:45:58.403 00:45:58.403 ' 00:45:58.403 13:16:27 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:58.403 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:45:58.403 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:58.403 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:58.403 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:58.403 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:58.403 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:58.403 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:58.403 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:58.403 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:58.403 13:16:27 nvmf_identify_passthru -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:58.403 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:58.403 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:58.403 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:58.403 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:58.403 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:58.403 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:58.403 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:58.404 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:58.404 13:16:27 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:45:58.404 13:16:27 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:58.404 13:16:27 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:58.404 13:16:27 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:58.404 13:16:27 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:58.404 13:16:27 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:58.404 13:16:27 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:58.404 13:16:27 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:45:58.404 13:16:27 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:58.404 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:45:58.404 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:58.404 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:58.404 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:58.404 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:58.404 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:45:58.404 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:58.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:58.404 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:58.404 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:58.404 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:58.404 13:16:27 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:58.404 13:16:27 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:45:58.404 13:16:27 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:58.404 13:16:27 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:58.404 13:16:27 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:58.404 13:16:27 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:58.404 13:16:27 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:58.404 13:16:27 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:58.404 13:16:27 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:45:58.404 13:16:27 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:58.404 13:16:27 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:45:58.404 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:58.404 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:58.404 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@476 -- 
# prepare_net_devs 00:45:58.404 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:58.404 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:58.404 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:58.404 13:16:27 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:58.404 13:16:27 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:58.404 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:45:58.404 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:45:58.404 13:16:27 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:45:58.404 13:16:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:04.986 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:04.986 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:46:04.986 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:46:04.986 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:46:04.986 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:46:04.986 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:46:04.986 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:46:04.986 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:46:04.986 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:46:04.986 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:46:04.986 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:46:04.986 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:46:04.986 13:16:34 
nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:46:04.986 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:46:04.986 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:46:04.986 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:04.986 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:04.986 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:04.986 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:04.986 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:46:04.987 
13:16:34 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:46:04.987 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:46:04.987 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:46:04.987 Found net devices under 0000:4b:00.0: cvl_0_0 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:46:04.987 Found net devices under 0000:4b:00.1: cvl_0_1 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:04.987 13:16:34 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:46:04.987 13:16:35 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:04.987 13:16:35 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:04.987 13:16:35 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:04.987 13:16:35 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:46:04.987 13:16:35 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:46:05.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:05.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:46:05.247 00:46:05.247 --- 10.0.0.2 ping statistics --- 00:46:05.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:05.247 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:46:05.247 13:16:35 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:05.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:46:05.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:46:05.247 00:46:05.247 --- 10.0.0.1 ping statistics --- 00:46:05.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:05.247 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:46:05.247 13:16:35 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:05.247 13:16:35 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:46:05.247 13:16:35 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:46:05.247 13:16:35 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:05.247 13:16:35 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:46:05.247 13:16:35 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:46:05.247 13:16:35 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:05.247 13:16:35 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:46:05.247 13:16:35 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:46:05.247 13:16:35 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:46:05.247 13:16:35 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:05.247 13:16:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:05.247 13:16:35 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:46:05.247 13:16:35 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:46:05.247 13:16:35 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:46:05.247 13:16:35 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:46:05.247 13:16:35 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:46:05.247 13:16:35 nvmf_identify_passthru -- 
common/autotest_common.sh@1498 -- # bdfs=() 00:46:05.247 13:16:35 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:46:05.247 13:16:35 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:46:05.247 13:16:35 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:46:05.247 13:16:35 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:46:05.247 13:16:35 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:46:05.247 13:16:35 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:46:05.247 13:16:35 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:46:05.247 13:16:35 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:46:05.247 13:16:35 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:46:05.247 13:16:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:46:05.247 13:16:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:46:05.247 13:16:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:46:05.907 13:16:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:46:05.908 13:16:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:46:05.908 13:16:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:46:05.908 13:16:35 nvmf_identify_passthru -- 
target/identify_passthru.sh@24 -- # awk '{print $3}' 00:46:06.524 13:16:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:46:06.524 13:16:36 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:46:06.524 13:16:36 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:06.524 13:16:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:06.524 13:16:36 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:46:06.524 13:16:36 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:06.524 13:16:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:06.524 13:16:36 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3785607 00:46:06.524 13:16:36 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:46:06.524 13:16:36 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:46:06.524 13:16:36 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3785607 00:46:06.524 13:16:36 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3785607 ']' 00:46:06.524 13:16:36 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:06.524 13:16:36 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:06.524 13:16:36 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:06.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:46:06.524 13:16:36 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:06.524 13:16:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:06.524 [2024-11-28 13:16:36.568252] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:46:06.524 [2024-11-28 13:16:36.568326] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:06.786 [2024-11-28 13:16:36.712102] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:46:06.786 [2024-11-28 13:16:36.771919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:06.786 [2024-11-28 13:16:36.800831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:06.786 [2024-11-28 13:16:36.800880] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:06.786 [2024-11-28 13:16:36.800888] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:06.786 [2024-11-28 13:16:36.800895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:06.786 [2024-11-28 13:16:36.800901] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:46:06.786 [2024-11-28 13:16:36.803084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:06.786 [2024-11-28 13:16:36.803241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:06.786 [2024-11-28 13:16:36.803526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:46:06.786 [2024-11-28 13:16:36.803528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:07.358 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:07.358 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:46:07.358 13:16:37 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:46:07.358 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:07.358 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:07.358 INFO: Log level set to 20 00:46:07.358 INFO: Requests: 00:46:07.358 { 00:46:07.358 "jsonrpc": "2.0", 00:46:07.358 "method": "nvmf_set_config", 00:46:07.358 "id": 1, 00:46:07.358 "params": { 00:46:07.358 "admin_cmd_passthru": { 00:46:07.358 "identify_ctrlr": true 00:46:07.358 } 00:46:07.358 } 00:46:07.358 } 00:46:07.358 00:46:07.358 INFO: response: 00:46:07.358 { 00:46:07.358 "jsonrpc": "2.0", 00:46:07.358 "id": 1, 00:46:07.358 "result": true 00:46:07.358 } 00:46:07.358 00:46:07.358 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:07.358 13:16:37 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:46:07.358 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:07.358 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:07.358 INFO: Setting log level to 20 00:46:07.358 INFO: Setting log level to 20 00:46:07.358 INFO: Log level set to 20 00:46:07.358 INFO: Log level set to 20 00:46:07.358 
INFO: Requests: 00:46:07.358 { 00:46:07.358 "jsonrpc": "2.0", 00:46:07.358 "method": "framework_start_init", 00:46:07.358 "id": 1 00:46:07.358 } 00:46:07.358 00:46:07.358 INFO: Requests: 00:46:07.358 { 00:46:07.358 "jsonrpc": "2.0", 00:46:07.358 "method": "framework_start_init", 00:46:07.358 "id": 1 00:46:07.358 } 00:46:07.358 00:46:07.619 [2024-11-28 13:16:37.484069] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:46:07.619 INFO: response: 00:46:07.619 { 00:46:07.619 "jsonrpc": "2.0", 00:46:07.619 "id": 1, 00:46:07.619 "result": true 00:46:07.619 } 00:46:07.619 00:46:07.619 INFO: response: 00:46:07.619 { 00:46:07.619 "jsonrpc": "2.0", 00:46:07.619 "id": 1, 00:46:07.619 "result": true 00:46:07.619 } 00:46:07.619 00:46:07.619 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:07.619 13:16:37 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:07.619 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:07.619 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:07.619 INFO: Setting log level to 40 00:46:07.619 INFO: Setting log level to 40 00:46:07.619 INFO: Setting log level to 40 00:46:07.619 [2024-11-28 13:16:37.497428] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:07.619 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:07.619 13:16:37 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:46:07.619 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:07.619 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:07.619 13:16:37 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:46:07.619 13:16:37 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:07.619 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:07.880 Nvme0n1 00:46:07.880 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:07.880 13:16:37 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:46:07.880 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:07.880 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:07.880 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:07.880 13:16:37 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:46:07.880 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:07.880 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:07.880 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:07.880 13:16:37 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:07.880 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:07.880 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:07.880 [2024-11-28 13:16:37.888351] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:07.880 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:07.880 13:16:37 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:46:07.880 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:07.880 13:16:37 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:07.880 [ 00:46:07.880 { 00:46:07.880 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:46:07.880 "subtype": "Discovery", 00:46:07.880 "listen_addresses": [], 00:46:07.880 "allow_any_host": true, 00:46:07.880 "hosts": [] 00:46:07.880 }, 00:46:07.880 { 00:46:07.880 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:07.880 "subtype": "NVMe", 00:46:07.880 "listen_addresses": [ 00:46:07.880 { 00:46:07.880 "trtype": "TCP", 00:46:07.880 "adrfam": "IPv4", 00:46:07.880 "traddr": "10.0.0.2", 00:46:07.880 "trsvcid": "4420" 00:46:07.880 } 00:46:07.880 ], 00:46:07.880 "allow_any_host": true, 00:46:07.880 "hosts": [], 00:46:07.880 "serial_number": "SPDK00000000000001", 00:46:07.880 "model_number": "SPDK bdev Controller", 00:46:07.880 "max_namespaces": 1, 00:46:07.880 "min_cntlid": 1, 00:46:07.880 "max_cntlid": 65519, 00:46:07.880 "namespaces": [ 00:46:07.880 { 00:46:07.880 "nsid": 1, 00:46:07.880 "bdev_name": "Nvme0n1", 00:46:07.880 "name": "Nvme0n1", 00:46:07.880 "nguid": "36344730526054870025384500000044", 00:46:07.880 "uuid": "36344730-5260-5487-0025-384500000044" 00:46:07.880 } 00:46:07.880 ] 00:46:07.880 } 00:46:07.880 ] 00:46:07.880 13:16:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:07.880 13:16:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:46:07.880 13:16:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:46:07.880 13:16:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:46:08.451 13:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:46:08.451 13:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:46:08.451 13:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:46:08.451 13:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:46:08.712 13:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:46:08.712 13:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:46:08.712 13:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:46:08.712 13:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:08.712 13:16:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:08.712 13:16:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:08.712 13:16:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:08.712 13:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:46:08.712 13:16:38 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:46:08.712 13:16:38 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:46:08.712 13:16:38 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:46:08.712 13:16:38 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:08.712 13:16:38 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:46:08.712 13:16:38 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:08.712 13:16:38 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:08.712 rmmod nvme_tcp 00:46:08.712 rmmod nvme_fabrics 00:46:08.712 rmmod nvme_keyring 00:46:08.712 13:16:38 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:08.712 13:16:38 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:46:08.712 13:16:38 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:46:08.712 13:16:38 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 3785607 ']' 00:46:08.712 13:16:38 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3785607 00:46:08.712 13:16:38 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3785607 ']' 00:46:08.712 13:16:38 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3785607 00:46:08.712 13:16:38 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:46:08.712 13:16:38 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:08.713 13:16:38 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3785607 00:46:08.974 13:16:38 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:08.974 13:16:38 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:08.974 13:16:38 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3785607' 00:46:08.974 killing process with pid 3785607 00:46:08.974 13:16:38 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3785607 00:46:08.974 13:16:38 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3785607 00:46:08.974 13:16:39 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:46:08.974 13:16:39 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:08.974 13:16:39 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:08.974 13:16:39 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:46:09.235 13:16:39 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:46:09.235 13:16:39 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:09.235 13:16:39 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:46:09.235 13:16:39 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:09.235 13:16:39 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:09.235 13:16:39 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:09.235 13:16:39 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:09.235 13:16:39 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:11.148 13:16:41 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:11.148 00:46:11.148 real 0m13.521s 00:46:11.148 user 0m11.331s 00:46:11.148 sys 0m6.625s 00:46:11.148 13:16:41 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:11.148 13:16:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:11.148 ************************************ 00:46:11.148 END TEST nvmf_identify_passthru 00:46:11.148 ************************************ 00:46:11.148 13:16:41 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:46:11.148 13:16:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:11.148 13:16:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:11.148 13:16:41 -- common/autotest_common.sh@10 -- # set +x 00:46:11.148 ************************************ 00:46:11.148 START TEST nvmf_dif 00:46:11.148 ************************************ 00:46:11.148 13:16:41 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:46:11.410 * Looking for test storage... 
00:46:11.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:46:11.410 13:16:41 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:46:11.410 13:16:41 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:46:11.410 13:16:41 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:46:11.410 13:16:41 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:46:11.410 13:16:41 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:11.410 13:16:41 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:46:11.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:11.410 --rc genhtml_branch_coverage=1 00:46:11.410 --rc genhtml_function_coverage=1 00:46:11.410 --rc genhtml_legend=1 00:46:11.410 --rc geninfo_all_blocks=1 00:46:11.410 --rc geninfo_unexecuted_blocks=1 00:46:11.410 00:46:11.410 ' 00:46:11.410 13:16:41 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:46:11.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:11.410 --rc genhtml_branch_coverage=1 00:46:11.410 --rc genhtml_function_coverage=1 00:46:11.410 --rc genhtml_legend=1 00:46:11.410 --rc geninfo_all_blocks=1 00:46:11.410 --rc geninfo_unexecuted_blocks=1 00:46:11.410 00:46:11.410 ' 00:46:11.410 13:16:41 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:46:11.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:11.410 --rc genhtml_branch_coverage=1 00:46:11.410 --rc genhtml_function_coverage=1 00:46:11.410 --rc genhtml_legend=1 00:46:11.410 --rc geninfo_all_blocks=1 00:46:11.410 --rc geninfo_unexecuted_blocks=1 00:46:11.410 00:46:11.410 ' 00:46:11.410 13:16:41 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:46:11.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:11.410 --rc genhtml_branch_coverage=1 00:46:11.410 --rc genhtml_function_coverage=1 00:46:11.410 --rc genhtml_legend=1 00:46:11.410 --rc geninfo_all_blocks=1 00:46:11.410 --rc geninfo_unexecuted_blocks=1 00:46:11.410 00:46:11.410 ' 00:46:11.410 13:16:41 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:11.410 13:16:41 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:46:11.410 13:16:41 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:11.410 13:16:41 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:11.410 13:16:41 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:11.410 13:16:41 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:11.410 13:16:41 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:11.410 13:16:41 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:11.410 13:16:41 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:11.410 13:16:41 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:11.410 13:16:41 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:11.410 13:16:41 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:11.410 13:16:41 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:46:11.410 13:16:41 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:46:11.410 13:16:41 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:11.410 13:16:41 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:11.410 13:16:41 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:11.410 13:16:41 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:11.410 13:16:41 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:11.410 13:16:41 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:11.410 13:16:41 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:11.410 13:16:41 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:11.410 13:16:41 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:11.410 13:16:41 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:46:11.410 13:16:41 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:11.410 13:16:41 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:46:11.410 13:16:41 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:11.411 13:16:41 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:11.411 13:16:41 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:11.411 13:16:41 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:11.411 13:16:41 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:11.411 13:16:41 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:11.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:11.411 13:16:41 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:11.411 13:16:41 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:11.411 13:16:41 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:11.411 13:16:41 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:46:11.411 13:16:41 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:46:11.411 13:16:41 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:46:11.411 13:16:41 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:46:11.411 13:16:41 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:46:11.411 13:16:41 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:46:11.411 13:16:41 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:11.411 13:16:41 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:46:11.411 13:16:41 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:46:11.411 13:16:41 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:46:11.411 13:16:41 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:11.411 13:16:41 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:11.411 13:16:41 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:11.411 13:16:41 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:46:11.411 13:16:41 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:46:11.411 13:16:41 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:46:11.411 13:16:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:46:19.553 13:16:48 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:46:19.553 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:46:19.553 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:46:19.553 13:16:48 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:46:19.553 Found net devices under 0000:4b:00.0: cvl_0_0 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:46:19.553 Found net devices under 0000:4b:00.1: cvl_0_1 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:19.553 
13:16:48 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:19.553 13:16:48 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:46:19.554 13:16:48 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:46:19.554 13:16:48 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:46:19.554 13:16:48 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:19.554 13:16:48 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:19.554 13:16:48 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:19.554 13:16:48 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:46:19.554 13:16:48 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:19.554 13:16:48 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:19.554 13:16:48 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:19.554 13:16:48 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:46:19.554 13:16:48 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:46:19.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:46:19.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms 00:46:19.554 00:46:19.554 --- 10.0.0.2 ping statistics --- 00:46:19.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:19.554 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:46:19.554 13:16:48 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:19.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:46:19.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:46:19.554 00:46:19.554 --- 10.0.0.1 ping statistics --- 00:46:19.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:19.554 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:46:19.554 13:16:48 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:19.554 13:16:48 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:46:19.554 13:16:48 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:46:19.554 13:16:48 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:22.854 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:46:22.854 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:46:22.854 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:46:22.854 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:46:22.854 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:46:22.854 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:46:22.854 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:46:22.854 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:46:22.854 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:46:22.854 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:46:22.854 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:46:22.854 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:46:22.854 0000:00:01.5 (8086 0b00): Already 
using the vfio-pci driver 00:46:22.854 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:46:22.854 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:46:22.854 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:46:22.854 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:46:22.854 13:16:52 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:22.854 13:16:52 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:46:22.854 13:16:52 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:46:22.854 13:16:52 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:22.854 13:16:52 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:46:22.854 13:16:52 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:46:22.854 13:16:52 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:46:22.854 13:16:52 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:46:22.854 13:16:52 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:22.854 13:16:52 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:22.854 13:16:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:22.854 13:16:52 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3791741 00:46:22.854 13:16:52 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3791741 00:46:22.854 13:16:52 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:46:22.854 13:16:52 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3791741 ']' 00:46:22.854 13:16:52 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:22.854 13:16:52 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:22.854 13:16:52 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:46:22.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:22.854 13:16:52 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:22.854 13:16:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:22.854 [2024-11-28 13:16:52.795576] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:46:22.854 [2024-11-28 13:16:52.795627] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:22.854 [2024-11-28 13:16:52.934244] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:46:23.116 [2024-11-28 13:16:52.992990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:23.116 [2024-11-28 13:16:53.011641] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:23.116 [2024-11-28 13:16:53.011673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:23.116 [2024-11-28 13:16:53.011681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:23.116 [2024-11-28 13:16:53.011687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:23.116 [2024-11-28 13:16:53.011693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:46:23.116 [2024-11-28 13:16:53.012249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:23.688 13:16:53 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:23.688 13:16:53 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:46:23.688 13:16:53 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:23.688 13:16:53 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:23.688 13:16:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:23.688 13:16:53 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:23.688 13:16:53 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:46:23.688 13:16:53 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:46:23.688 13:16:53 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:23.688 13:16:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:23.688 [2024-11-28 13:16:53.643253] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:23.688 13:16:53 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:23.688 13:16:53 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:46:23.688 13:16:53 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:23.688 13:16:53 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:23.688 13:16:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:23.688 ************************************ 00:46:23.688 START TEST fio_dif_1_default 00:46:23.688 ************************************ 00:46:23.688 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:46:23.688 13:16:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:46:23.688 13:16:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:46:23.688 13:16:53 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:46:23.688 13:16:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:23.689 bdev_null0 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:23.689 [2024-11-28 13:16:53.735511] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:23.689 { 00:46:23.689 "params": { 00:46:23.689 "name": "Nvme$subsystem", 00:46:23.689 "trtype": "$TEST_TRANSPORT", 00:46:23.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:23.689 "adrfam": "ipv4", 00:46:23.689 "trsvcid": "$NVMF_PORT", 00:46:23.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:23.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:23.689 "hdgst": ${hdgst:-false}, 00:46:23.689 "ddgst": ${ddgst:-false} 00:46:23.689 }, 00:46:23.689 "method": "bdev_nvme_attach_controller" 00:46:23.689 } 00:46:23.689 EOF 00:46:23.689 )") 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
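The `ldd … | grep libasan | awk '{print $3}'` pipeline in the trace above is how the harness decides what to put in `LD_PRELOAD` before launching fio: field 3 of a matching `ldd` line is the resolved library path. A minimal Python sketch of the same extraction (a hypothetical helper, not part of the SPDK test scripts — it only mirrors the pipeline shown in the log):

```python
def find_sanitizer_lib(ldd_output: str, name: str = "libasan") -> str:
    # Mirror of `ldd <plugin> | grep <name> | awk '{print $3}'`:
    # an ldd line looks like "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)",
    # so the third whitespace-separated field is the resolved path.
    for line in ldd_output.splitlines():
        if name in line:
            fields = line.split()
            if len(fields) >= 3:
                return fields[2]
    return ""  # empty string, like the harness's `asan_lib=` when no match


sample = (
    "\tlibasan.so.8 => /usr/lib64/libasan.so.8 (0x00007f0000000000)\n"
    "\tlibc.so.6 => /lib64/libc.so.6 (0x00007f0000200000)"
)
print(find_sanitizer_lib(sample))
```

In this run both `grep libasan` and `grep libclang_rt.asan` come back empty (`asan_lib=`), so only the spdk_bdev plugin itself ends up in `LD_PRELOAD`.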
00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:46:23.689 "params": { 00:46:23.689 "name": "Nvme0", 00:46:23.689 "trtype": "tcp", 00:46:23.689 "traddr": "10.0.0.2", 00:46:23.689 "adrfam": "ipv4", 00:46:23.689 "trsvcid": "4420", 00:46:23.689 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:23.689 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:23.689 "hdgst": false, 00:46:23.689 "ddgst": false 00:46:23.689 }, 00:46:23.689 "method": "bdev_nvme_attach_controller" 00:46:23.689 }' 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:46:23.689 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:46:23.992 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:46:23.992 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:46:23.992 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:23.992 13:16:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:24.252 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:46:24.252 fio-3.35 
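The JSON that `gen_nvmf_target_json` prints above is the per-subsystem controller configuration consumed by fio's `--spdk_json_conf`. Its shape is fixed: one `bdev_nvme_attach_controller` entry per subsystem id, with the NQNs derived from that id. A hedged Python sketch reproducing the same fields (a hypothetical stand-in for the heredoc in `nvmf/common.sh`; the shell helper comma-joins the objects with `IFS=,` rather than emitting a JSON array as done here for clarity):

```python
import json


def gen_nvmf_target_json(subsystems, traddr="10.0.0.2", trsvcid="4420"):
    # One attach-controller entry per subsystem id, matching the fields
    # printed in the log (trtype/adrfam fixed, digests disabled by default).
    entries = []
    for sub in subsystems:
        entries.append({
            "params": {
                "name": f"Nvme{sub}",
                "trtype": "tcp",
                "traddr": traddr,
                "adrfam": "ipv4",
                "trsvcid": trsvcid,
                "subnqn": f"nqn.2016-06.io.spdk:cnode{sub}",
                "hostnqn": f"nqn.2016-06.io.spdk:host{sub}",
                "hdgst": False,
                "ddgst": False,
            },
            "method": "bdev_nvme_attach_controller",
        })
    return json.dumps(entries, indent=2)


print(gen_nvmf_target_json([0]))
```

For the multi-subsystem test later in this run, the same template is simply instantiated twice (`Nvme0`/`cnode0` and `Nvme1`/`cnode1`, both on 10.0.0.2:4420).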
00:46:24.252 Starting 1 thread 00:46:36.480 00:46:36.480 filename0: (groupid=0, jobs=1): err= 0: pid=3792275: Thu Nov 28 13:17:04 2024 00:46:36.480 read: IOPS=192, BW=768KiB/s (787kB/s)(7696KiB/10019msec) 00:46:36.481 slat (nsec): min=5522, max=32711, avg=6415.97, stdev=1845.85 00:46:36.481 clat (usec): min=598, max=42927, avg=20811.16, stdev=20158.38 00:46:36.481 lat (usec): min=604, max=42953, avg=20817.57, stdev=20158.33 00:46:36.481 clat percentiles (usec): 00:46:36.481 | 1.00th=[ 717], 5.00th=[ 791], 10.00th=[ 816], 20.00th=[ 840], 00:46:36.481 | 30.00th=[ 857], 40.00th=[ 906], 50.00th=[ 1057], 60.00th=[41157], 00:46:36.481 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:46:36.481 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:46:36.481 | 99.99th=[42730] 00:46:36.481 bw ( KiB/s): min= 704, max= 832, per=99.98%, avg=768.00, stdev=27.47, samples=20 00:46:36.481 iops : min= 176, max= 208, avg=192.00, stdev= 6.87, samples=20 00:46:36.481 lat (usec) : 750=2.60%, 1000=46.83% 00:46:36.481 lat (msec) : 2=0.88%, 4=0.21%, 50=49.48% 00:46:36.481 cpu : usr=93.68%, sys=6.11%, ctx=13, majf=0, minf=207 00:46:36.481 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:36.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:36.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:36.481 issued rwts: total=1924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:36.481 latency : target=0, window=0, percentile=100.00%, depth=4 00:46:36.481 00:46:36.481 Run status group 0 (all jobs): 00:46:36.481 READ: bw=768KiB/s (787kB/s), 768KiB/s-768KiB/s (787kB/s-787kB/s), io=7696KiB (7881kB), run=10019-10019msec 00:46:36.481 13:17:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:46:36.481 13:17:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:46:36.481 13:17:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for 
sub in "$@" 00:46:36.481 13:17:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:36.481 13:17:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:46:36.481 13:17:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:36.481 13:17:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:36.481 13:17:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:36.481 13:17:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:36.481 13:17:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:36.481 13:17:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:36.481 13:17:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:36.481 13:17:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:36.481 00:46:36.481 real 0m11.261s 00:46:36.481 user 0m20.258s 00:46:36.481 sys 0m1.004s 00:46:36.481 13:17:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:36.481 13:17:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:36.481 ************************************ 00:46:36.481 END TEST fio_dif_1_default 00:46:36.481 ************************************ 00:46:36.481 13:17:04 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:46:36.481 13:17:04 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:36.481 13:17:04 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:36.481 13:17:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:36.481 ************************************ 00:46:36.481 START TEST fio_dif_1_multi_subsystems 00:46:36.481 ************************************ 00:46:36.481 13:17:05 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:36.481 bdev_null0 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:36.481 13:17:05 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:36.481 [2024-11-28 13:17:05.074831] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:36.481 bdev_null1 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:36.481 { 00:46:36.481 "params": { 00:46:36.481 "name": "Nvme$subsystem", 00:46:36.481 "trtype": "$TEST_TRANSPORT", 00:46:36.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:36.481 "adrfam": "ipv4", 00:46:36.481 "trsvcid": "$NVMF_PORT", 00:46:36.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:36.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:36.481 "hdgst": ${hdgst:-false}, 00:46:36.481 "ddgst": ${ddgst:-false} 00:46:36.481 }, 00:46:36.481 "method": "bdev_nvme_attach_controller" 00:46:36.481 } 00:46:36.481 EOF 00:46:36.481 )") 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:36.481 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:46:36.482 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:46:36.482 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:46:36.482 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:46:36.482 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:46:36.482 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:36.482 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:36.482 { 00:46:36.482 "params": { 00:46:36.482 "name": "Nvme$subsystem", 00:46:36.482 "trtype": "$TEST_TRANSPORT", 00:46:36.482 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:36.482 "adrfam": "ipv4", 00:46:36.482 "trsvcid": "$NVMF_PORT", 00:46:36.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:36.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:36.482 "hdgst": ${hdgst:-false}, 00:46:36.482 "ddgst": ${ddgst:-false} 00:46:36.482 }, 00:46:36.482 "method": "bdev_nvme_attach_controller" 00:46:36.482 } 00:46:36.482 EOF 00:46:36.482 )") 00:46:36.482 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:46:36.482 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:46:36.482 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:46:36.482 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:46:36.482 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:46:36.482 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:46:36.482 "params": { 00:46:36.482 "name": "Nvme0", 00:46:36.482 "trtype": "tcp", 00:46:36.482 "traddr": "10.0.0.2", 00:46:36.482 "adrfam": "ipv4", 00:46:36.482 "trsvcid": "4420", 00:46:36.482 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:36.482 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:36.482 "hdgst": false, 00:46:36.482 "ddgst": false 00:46:36.482 }, 00:46:36.482 "method": "bdev_nvme_attach_controller" 00:46:36.482 },{ 00:46:36.482 "params": { 00:46:36.482 "name": "Nvme1", 00:46:36.482 "trtype": "tcp", 00:46:36.482 "traddr": "10.0.0.2", 00:46:36.482 "adrfam": "ipv4", 00:46:36.482 "trsvcid": "4420", 00:46:36.482 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:36.482 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:36.482 "hdgst": false, 00:46:36.482 "ddgst": false 00:46:36.482 }, 00:46:36.482 "method": "bdev_nvme_attach_controller" 00:46:36.482 }' 00:46:36.482 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:46:36.482 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:46:36.482 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:46:36.482 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:36.482 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:46:36.482 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:46:36.482 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:46:36.482 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:46:36.482 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:36.482 13:17:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:36.482 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:46:36.482 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:46:36.482 fio-3.35 00:46:36.482 Starting 2 threads 00:46:46.476 00:46:46.476 filename0: (groupid=0, jobs=1): err= 0: pid=3794613: Thu Nov 28 13:17:16 2024 00:46:46.476 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10003msec) 00:46:46.476 slat (nsec): min=5529, max=37232, avg=6416.38, stdev=1876.54 00:46:46.476 clat (usec): min=573, max=43049, avg=21084.17, stdev=20149.50 00:46:46.476 lat (usec): min=581, max=43055, avg=21090.59, stdev=20149.61 00:46:46.476 clat percentiles (usec): 00:46:46.476 | 1.00th=[ 627], 5.00th=[ 816], 10.00th=[ 832], 20.00th=[ 857], 00:46:46.476 | 30.00th=[ 873], 40.00th=[ 889], 50.00th=[41157], 60.00th=[41157], 00:46:46.476 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:46:46.476 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:46:46.476 | 99.99th=[43254] 00:46:46.476 bw ( KiB/s): min= 672, max= 768, per=65.77%, avg=759.58, stdev=25.78, samples=19 00:46:46.476 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:46:46.476 lat (usec) : 750=2.32%, 1000=47.26% 00:46:46.476 lat (msec) : 2=0.21%, 50=50.21% 00:46:46.476 cpu : usr=95.30%, sys=4.49%, ctx=14, majf=0, minf=151 00:46:46.476 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:46.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:46:46.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:46.476 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:46.476 latency : target=0, window=0, percentile=100.00%, depth=4 00:46:46.476 filename1: (groupid=0, jobs=1): err= 0: pid=3794614: Thu Nov 28 13:17:16 2024 00:46:46.476 read: IOPS=99, BW=398KiB/s (408kB/s)(4000KiB/10038msec) 00:46:46.476 slat (nsec): min=5531, max=33335, avg=6599.24, stdev=2370.41 00:46:46.476 clat (usec): min=661, max=43004, avg=40130.70, stdev=6701.34 00:46:46.476 lat (usec): min=669, max=43013, avg=40137.30, stdev=6700.63 00:46:46.476 clat percentiles (usec): 00:46:46.476 | 1.00th=[ 693], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:46:46.476 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:46:46.476 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:46:46.476 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:46:46.476 | 99.99th=[43254] 00:46:46.476 bw ( KiB/s): min= 352, max= 480, per=34.49%, avg=398.40, stdev=30.22, samples=20 00:46:46.476 iops : min= 88, max= 120, avg=99.60, stdev= 7.56, samples=20 00:46:46.476 lat (usec) : 750=2.00%, 1000=0.40% 00:46:46.476 lat (msec) : 2=0.40%, 50=97.20% 00:46:46.476 cpu : usr=95.51%, sys=4.28%, ctx=9, majf=0, minf=153 00:46:46.476 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:46.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:46.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:46.476 issued rwts: total=1000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:46.476 latency : target=0, window=0, percentile=100.00%, depth=4 00:46:46.476 00:46:46.476 Run status group 0 (all jobs): 00:46:46.476 READ: bw=1154KiB/s (1182kB/s), 398KiB/s-758KiB/s (408kB/s-776kB/s), io=11.3MiB (11.9MB), run=10003-10038msec 00:46:46.476 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@96 -- # destroy_subsystems 0 1 00:46:46.476 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:46:46.476 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:46:46.476 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:46.476 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:46:46.476 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:46.476 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:46.476 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:46.476 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:46.476 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:46.476 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:46.476 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:46.477 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:46.477 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:46:46.477 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:46:46.477 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:46:46.477 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:46.477 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:46.477 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:46:46.477 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:46.477 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:46:46.477 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:46.477 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:46.477 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:46.477 00:46:46.477 real 0m11.548s 00:46:46.477 user 0m35.302s 00:46:46.477 sys 0m1.211s 00:46:46.477 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:46.477 13:17:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:46.477 ************************************ 00:46:46.477 END TEST fio_dif_1_multi_subsystems 00:46:46.477 ************************************ 00:46:46.738 13:17:16 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:46:46.738 13:17:16 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:46.738 13:17:16 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:46.738 13:17:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:46.738 ************************************ 00:46:46.738 START TEST fio_dif_rand_params 00:46:46.738 ************************************ 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 
00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:46.738 bdev_null0 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:46:46.738 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:46.739 [2024-11-28 13:17:16.707773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:46.739 { 00:46:46.739 "params": { 00:46:46.739 "name": "Nvme$subsystem", 00:46:46.739 "trtype": 
"$TEST_TRANSPORT", 00:46:46.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:46.739 "adrfam": "ipv4", 00:46:46.739 "trsvcid": "$NVMF_PORT", 00:46:46.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:46.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:46.739 "hdgst": ${hdgst:-false}, 00:46:46.739 "ddgst": ${ddgst:-false} 00:46:46.739 }, 00:46:46.739 "method": "bdev_nvme_attach_controller" 00:46:46.739 } 00:46:46.739 EOF 00:46:46.739 )") 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:46:46.739 "params": { 00:46:46.739 "name": "Nvme0", 00:46:46.739 "trtype": "tcp", 00:46:46.739 "traddr": "10.0.0.2", 00:46:46.739 "adrfam": "ipv4", 00:46:46.739 "trsvcid": "4420", 00:46:46.739 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:46.739 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:46.739 "hdgst": false, 00:46:46.739 "ddgst": false 00:46:46.739 }, 00:46:46.739 "method": "bdev_nvme_attach_controller" 00:46:46.739 }' 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:46.739 13:17:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:47.321 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:46:47.321 ... 00:46:47.321 fio-3.35 00:46:47.321 Starting 3 threads 00:46:53.907 00:46:53.907 filename0: (groupid=0, jobs=1): err= 0: pid=3796983: Thu Nov 28 13:17:22 2024 00:46:53.907 read: IOPS=342, BW=42.8MiB/s (44.9MB/s)(216MiB/5048msec) 00:46:53.907 slat (nsec): min=5554, max=44451, avg=6208.95, stdev=1075.37 00:46:53.907 clat (usec): min=4346, max=87269, avg=8724.16, stdev=6403.71 00:46:53.907 lat (usec): min=4352, max=87276, avg=8730.37, stdev=6403.90 00:46:53.907 clat percentiles (usec): 00:46:53.907 | 1.00th=[ 4686], 5.00th=[ 5669], 10.00th=[ 6063], 20.00th=[ 6521], 00:46:53.907 | 30.00th=[ 6849], 40.00th=[ 7308], 50.00th=[ 7701], 60.00th=[ 8094], 00:46:53.907 | 70.00th=[ 8455], 80.00th=[ 9110], 90.00th=[10028], 95.00th=[10814], 00:46:53.907 | 99.00th=[47973], 99.50th=[49021], 99.90th=[50070], 99.95th=[87557], 00:46:53.907 | 99.99th=[87557] 00:46:53.907 bw ( KiB/s): min=26112, max=49664, per=42.56%, avg=44211.20, stdev=7363.48, samples=10 00:46:53.907 iops : min= 204, max= 388, avg=345.40, stdev=57.53, samples=10 00:46:53.907 lat (msec) : 10=89.30%, 20=8.39%, 50=2.08%, 100=0.23% 00:46:53.907 cpu : usr=93.52%, sys=6.18%, ctx=55, majf=0, minf=123 00:46:53.907 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:53.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:53.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:53.907 issued rwts: total=1729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:53.907 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:53.907 filename0: (groupid=0, jobs=1): err= 0: pid=3796984: Thu Nov 28 13:17:22 2024 00:46:53.907 read: IOPS=151, BW=18.9MiB/s (19.8MB/s)(95.2MiB/5045msec) 00:46:53.907 slat (nsec): min=5545, max=31894, 
avg=8534.76, stdev=1766.91 00:46:53.907 clat (msec): min=4, max=132, avg=19.79, stdev=22.89 00:46:53.907 lat (msec): min=4, max=132, avg=19.80, stdev=22.89 00:46:53.907 clat percentiles (msec): 00:46:53.907 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:46:53.907 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:46:53.907 | 70.00th=[ 11], 80.00th=[ 48], 90.00th=[ 51], 95.00th=[ 53], 00:46:53.907 | 99.00th=[ 92], 99.50th=[ 129], 99.90th=[ 133], 99.95th=[ 133], 00:46:53.907 | 99.99th=[ 133] 00:46:53.907 bw ( KiB/s): min=13824, max=30720, per=18.73%, avg=19456.00, stdev=5820.21, samples=10 00:46:53.907 iops : min= 108, max= 240, avg=152.00, stdev=45.47, samples=10 00:46:53.907 lat (msec) : 10=69.29%, 20=7.87%, 50=11.55%, 100=10.76%, 250=0.52% 00:46:53.907 cpu : usr=96.00%, sys=3.77%, ctx=7, majf=0, minf=68 00:46:53.907 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:53.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:53.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:53.907 issued rwts: total=762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:53.907 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:53.907 filename0: (groupid=0, jobs=1): err= 0: pid=3796985: Thu Nov 28 13:17:22 2024 00:46:53.907 read: IOPS=318, BW=39.8MiB/s (41.7MB/s)(201MiB/5045msec) 00:46:53.907 slat (nsec): min=5544, max=28774, avg=8305.08, stdev=1319.62 00:46:53.907 clat (usec): min=5050, max=86788, avg=9386.55, stdev=6528.94 00:46:53.907 lat (usec): min=5057, max=86795, avg=9394.85, stdev=6529.00 00:46:53.907 clat percentiles (usec): 00:46:53.907 | 1.00th=[ 5800], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 7242], 00:46:53.907 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8291], 60.00th=[ 8586], 00:46:53.907 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10421], 95.00th=[11207], 00:46:53.907 | 99.00th=[48497], 99.50th=[49021], 99.90th=[51119], 99.95th=[86508], 
00:46:53.907 | 99.99th=[86508] 00:46:53.907 bw ( KiB/s): min=26368, max=47360, per=39.53%, avg=41062.40, stdev=6043.87, samples=10 00:46:53.907 iops : min= 206, max= 370, avg=320.80, stdev=47.22, samples=10 00:46:53.907 lat (msec) : 10=85.24%, 20=12.27%, 50=2.30%, 100=0.19% 00:46:53.907 cpu : usr=94.65%, sys=5.11%, ctx=9, majf=0, minf=100 00:46:53.907 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:53.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:53.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:53.907 issued rwts: total=1606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:53.907 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:53.907 00:46:53.907 Run status group 0 (all jobs): 00:46:53.907 READ: bw=101MiB/s (106MB/s), 18.9MiB/s-42.8MiB/s (19.8MB/s-44.9MB/s), io=512MiB (537MB), run=5045-5048msec 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.907 bdev_null0 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:53.907 
13:17:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:53.907 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:53.908 13:17:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:53.908 13:17:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.908 13:17:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:53.908 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:53.908 13:17:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:53.908 13:17:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.908 [2024-11-28 13:17:22.976209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:53.908 13:17:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:53.908 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:53.908 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:46:53.908 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:46:53.908 13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:46:53.908 13:17:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:53.908 13:17:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.908 bdev_null1 00:46:53.908 13:17:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:53.908 
13:17:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:46:53.908 13:17:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:53.908 13:17:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:46:53.908 bdev_null2 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:53.908 { 00:46:53.908 "params": { 00:46:53.908 "name": "Nvme$subsystem", 00:46:53.908 "trtype": "$TEST_TRANSPORT", 00:46:53.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:53.908 "adrfam": "ipv4", 00:46:53.908 "trsvcid": "$NVMF_PORT", 00:46:53.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:53.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:53.908 "hdgst": ${hdgst:-false}, 00:46:53.908 "ddgst": ${ddgst:-false} 00:46:53.908 }, 00:46:53.908 "method": "bdev_nvme_attach_controller" 00:46:53.908 } 00:46:53.908 EOF 00:46:53.908 )") 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # shift 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:53.908 { 00:46:53.908 "params": { 00:46:53.908 "name": "Nvme$subsystem", 00:46:53.908 "trtype": "$TEST_TRANSPORT", 00:46:53.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:53.908 "adrfam": "ipv4", 00:46:53.908 "trsvcid": "$NVMF_PORT", 00:46:53.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:53.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:53.908 "hdgst": ${hdgst:-false}, 00:46:53.908 "ddgst": ${ddgst:-false} 00:46:53.908 }, 00:46:53.908 "method": "bdev_nvme_attach_controller" 00:46:53.908 } 00:46:53.908 EOF 00:46:53.908 )") 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:53.908 { 00:46:53.908 "params": { 00:46:53.908 "name": "Nvme$subsystem", 00:46:53.908 "trtype": "$TEST_TRANSPORT", 00:46:53.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:53.908 "adrfam": "ipv4", 00:46:53.908 "trsvcid": "$NVMF_PORT", 00:46:53.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:53.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:53.908 "hdgst": ${hdgst:-false}, 00:46:53.908 "ddgst": ${ddgst:-false} 00:46:53.908 }, 00:46:53.908 "method": "bdev_nvme_attach_controller" 00:46:53.908 } 00:46:53.908 EOF 00:46:53.908 )") 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:46:53.908 13:17:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:46:53.908 "params": { 00:46:53.908 "name": "Nvme0", 00:46:53.908 "trtype": "tcp", 00:46:53.908 "traddr": "10.0.0.2", 00:46:53.908 "adrfam": "ipv4", 00:46:53.908 "trsvcid": "4420", 00:46:53.908 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:53.908 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:53.908 "hdgst": false, 00:46:53.908 "ddgst": false 00:46:53.908 }, 00:46:53.908 "method": "bdev_nvme_attach_controller" 00:46:53.908 },{ 00:46:53.908 "params": { 00:46:53.908 "name": "Nvme1", 00:46:53.908 "trtype": "tcp", 00:46:53.908 "traddr": "10.0.0.2", 00:46:53.908 "adrfam": "ipv4", 00:46:53.908 "trsvcid": "4420", 00:46:53.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:53.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:53.908 "hdgst": false, 00:46:53.908 "ddgst": false 00:46:53.908 }, 00:46:53.909 "method": "bdev_nvme_attach_controller" 00:46:53.909 },{ 00:46:53.909 "params": { 00:46:53.909 "name": "Nvme2", 00:46:53.909 "trtype": "tcp", 00:46:53.909 "traddr": "10.0.0.2", 00:46:53.909 "adrfam": "ipv4", 00:46:53.909 "trsvcid": "4420", 00:46:53.909 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:46:53.909 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:46:53.909 "hdgst": false, 00:46:53.909 "ddgst": false 00:46:53.909 }, 00:46:53.909 "method": "bdev_nvme_attach_controller" 00:46:53.909 }' 00:46:53.909 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:46:53.909 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:46:53.909 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:46:53.909 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:53.909 13:17:23 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:46:53.909 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:46:53.909 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:46:53.909 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:46:53.909 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:53.909 13:17:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:53.909 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:46:53.909 ... 00:46:53.909 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:46:53.909 ... 00:46:53.909 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:46:53.909 ... 
00:46:53.909 fio-3.35 00:46:53.909 Starting 24 threads 00:47:06.147 00:47:06.147 filename0: (groupid=0, jobs=1): err= 0: pid=3798303: Thu Nov 28 13:17:34 2024 00:47:06.147 read: IOPS=685, BW=2741KiB/s (2806kB/s)(26.8MiB/10018msec) 00:47:06.147 slat (nsec): min=5699, max=62967, avg=11615.01, stdev=6869.10 00:47:06.147 clat (usec): min=811, max=26353, avg=23254.67, stdev=3997.94 00:47:06.147 lat (usec): min=824, max=26378, avg=23266.28, stdev=3997.00 00:47:06.147 clat percentiles (usec): 00:47:06.147 | 1.00th=[ 1483], 5.00th=[23200], 10.00th=[23725], 20.00th=[23987], 00:47:06.147 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:47:06.147 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24249], 95.00th=[24511], 00:47:06.147 | 99.00th=[24511], 99.50th=[24773], 99.90th=[25035], 99.95th=[26084], 00:47:06.147 | 99.99th=[26346] 00:47:06.147 bw ( KiB/s): min= 2554, max= 4352, per=4.35%, avg=2736.80, stdev=384.44, samples=20 00:47:06.147 iops : min= 638, max= 1088, avg=684.00, stdev=96.17, samples=20 00:47:06.147 lat (usec) : 1000=0.03% 00:47:06.147 lat (msec) : 2=2.01%, 4=0.29%, 10=1.22%, 20=0.87%, 50=95.57% 00:47:06.147 cpu : usr=98.78%, sys=0.95%, ctx=17, majf=0, minf=50 00:47:06.147 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:47:06.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.147 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.147 issued rwts: total=6864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:06.147 filename0: (groupid=0, jobs=1): err= 0: pid=3798304: Thu Nov 28 13:17:34 2024 00:47:06.147 read: IOPS=656, BW=2628KiB/s (2691kB/s)(25.9MiB/10083msec) 00:47:06.147 slat (nsec): min=5703, max=81077, avg=14830.25, stdev=12870.25 00:47:06.147 clat (msec): min=11, max=112, avg=24.24, stdev= 4.41 00:47:06.147 lat (msec): min=11, max=112, avg=24.25, stdev= 4.41 00:47:06.147 clat 
percentiles (msec): 00:47:06.147 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:47:06.147 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 25], 00:47:06.147 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:47:06.147 | 99.00th=[ 25], 99.50th=[ 26], 99.90th=[ 113], 99.95th=[ 113], 00:47:06.147 | 99.99th=[ 113] 00:47:06.147 bw ( KiB/s): min= 2427, max= 2688, per=4.20%, avg=2642.65, stdev=75.72, samples=20 00:47:06.147 iops : min= 606, max= 672, avg=660.60, stdev=19.03, samples=20 00:47:06.147 lat (msec) : 20=0.72%, 50=99.03%, 250=0.24% 00:47:06.147 cpu : usr=98.41%, sys=1.04%, ctx=86, majf=0, minf=44 00:47:06.147 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:47:06.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.147 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.147 issued rwts: total=6624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:06.147 filename0: (groupid=0, jobs=1): err= 0: pid=3798305: Thu Nov 28 13:17:34 2024 00:47:06.147 read: IOPS=662, BW=2651KiB/s (2715kB/s)(26.2MiB/10111msec) 00:47:06.147 slat (usec): min=5, max=104, avg=11.56, stdev= 7.99 00:47:06.147 clat (msec): min=6, max=112, avg=24.05, stdev= 4.64 00:47:06.147 lat (msec): min=6, max=112, avg=24.06, stdev= 4.63 00:47:06.147 clat percentiles (msec): 00:47:06.147 | 1.00th=[ 10], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:47:06.147 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 25], 00:47:06.147 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:47:06.147 | 99.00th=[ 26], 99.50th=[ 27], 99.90th=[ 113], 99.95th=[ 113], 00:47:06.147 | 99.99th=[ 113] 00:47:06.147 bw ( KiB/s): min= 2560, max= 3072, per=4.25%, avg=2673.20, stdev=103.80, samples=20 00:47:06.147 iops : min= 640, max= 768, avg=668.20, stdev=25.97, samples=20 00:47:06.147 lat (msec) : 10=1.22%, 20=0.69%, 
50=97.85%, 100=0.03%, 250=0.21% 00:47:06.147 cpu : usr=98.92%, sys=0.78%, ctx=58, majf=0, minf=48 00:47:06.147 IO depths : 1=1.7%, 2=4.8%, 4=21.8%, 8=60.9%, 16=10.8%, 32=0.0%, >=64=0.0% 00:47:06.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.147 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.147 issued rwts: total=6702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:06.147 filename0: (groupid=0, jobs=1): err= 0: pid=3798306: Thu Nov 28 13:17:34 2024 00:47:06.147 read: IOPS=653, BW=2614KiB/s (2677kB/s)(25.7MiB/10061msec) 00:47:06.147 slat (nsec): min=5724, max=60291, avg=11968.09, stdev=7843.63 00:47:06.147 clat (msec): min=18, max=113, avg=24.31, stdev= 3.50 00:47:06.147 lat (msec): min=18, max=113, avg=24.32, stdev= 3.50 00:47:06.147 clat percentiles (msec): 00:47:06.147 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:47:06.147 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 25], 00:47:06.147 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:47:06.147 | 99.00th=[ 27], 99.50th=[ 31], 99.90th=[ 88], 99.95th=[ 88], 00:47:06.147 | 99.99th=[ 113] 00:47:06.147 bw ( KiB/s): min= 2299, max= 2688, per=4.17%, avg=2621.95, stdev=105.01, samples=20 00:47:06.147 iops : min= 574, max= 672, avg=655.30, stdev=26.31, samples=20 00:47:06.147 lat (msec) : 20=0.36%, 50=99.39%, 100=0.21%, 250=0.03% 00:47:06.147 cpu : usr=98.45%, sys=0.96%, ctx=157, majf=0, minf=40 00:47:06.147 IO depths : 1=5.1%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.4%, 32=0.0%, >=64=0.0% 00:47:06.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.147 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.147 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:06.147 filename0: (groupid=0, jobs=1): err= 
0: pid=3798308: Thu Nov 28 13:17:34 2024 00:47:06.147 read: IOPS=653, BW=2615KiB/s (2678kB/s)(25.7MiB/10059msec) 00:47:06.147 slat (usec): min=5, max=105, avg=31.50, stdev=16.43 00:47:06.147 clat (msec): min=22, max=113, avg=24.17, stdev= 4.53 00:47:06.147 lat (msec): min=22, max=113, avg=24.20, stdev= 4.53 00:47:06.148 clat percentiles (msec): 00:47:06.148 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:47:06.148 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:47:06.148 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 00:47:06.148 | 99.00th=[ 25], 99.50th=[ 26], 99.90th=[ 113], 99.95th=[ 113], 00:47:06.148 | 99.99th=[ 113] 00:47:06.148 bw ( KiB/s): min= 2299, max= 2688, per=4.17%, avg=2622.15, stdev=105.19, samples=20 00:47:06.148 iops : min= 574, max= 672, avg=655.35, stdev=26.33, samples=20 00:47:06.148 lat (msec) : 50=99.76%, 250=0.24% 00:47:06.148 cpu : usr=98.74%, sys=0.86%, ctx=114, majf=0, minf=27 00:47:06.148 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:47:06.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.148 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.148 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:06.148 filename0: (groupid=0, jobs=1): err= 0: pid=3798309: Thu Nov 28 13:17:34 2024 00:47:06.148 read: IOPS=655, BW=2623KiB/s (2686kB/s)(25.8MiB/10076msec) 00:47:06.148 slat (usec): min=6, max=111, avg=35.65, stdev=18.76 00:47:06.148 clat (msec): min=16, max=112, avg=24.06, stdev= 4.37 00:47:06.148 lat (msec): min=16, max=112, avg=24.10, stdev= 4.37 00:47:06.148 clat percentiles (msec): 00:47:06.148 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:47:06.148 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:47:06.148 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 
00:47:06.148 | 99.00th=[ 25], 99.50th=[ 25], 99.90th=[ 113], 99.95th=[ 113], 00:47:06.148 | 99.99th=[ 113] 00:47:06.148 bw ( KiB/s): min= 2427, max= 2688, per=4.19%, avg=2635.95, stdev=76.88, samples=20 00:47:06.148 iops : min= 606, max= 672, avg=658.90, stdev=19.30, samples=20 00:47:06.148 lat (msec) : 20=0.24%, 50=99.52%, 250=0.24% 00:47:06.148 cpu : usr=98.33%, sys=1.06%, ctx=173, majf=0, minf=26 00:47:06.148 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:47:06.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.148 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.148 issued rwts: total=6608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:06.148 filename0: (groupid=0, jobs=1): err= 0: pid=3798310: Thu Nov 28 13:17:34 2024 00:47:06.148 read: IOPS=653, BW=2615KiB/s (2678kB/s)(25.7MiB/10058msec) 00:47:06.148 slat (usec): min=5, max=109, avg=30.70, stdev=16.85 00:47:06.148 clat (msec): min=17, max=113, avg=24.19, stdev= 4.52 00:47:06.148 lat (msec): min=17, max=113, avg=24.22, stdev= 4.52 00:47:06.148 clat percentiles (msec): 00:47:06.148 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:47:06.148 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:47:06.148 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 00:47:06.148 | 99.00th=[ 26], 99.50th=[ 35], 99.90th=[ 113], 99.95th=[ 113], 00:47:06.148 | 99.99th=[ 113] 00:47:06.148 bw ( KiB/s): min= 2299, max= 2688, per=4.17%, avg=2622.15, stdev=105.19, samples=20 00:47:06.148 iops : min= 574, max= 672, avg=655.35, stdev=26.33, samples=20 00:47:06.148 lat (msec) : 20=0.09%, 50=99.67%, 250=0.24% 00:47:06.148 cpu : usr=98.97%, sys=0.74%, ctx=54, majf=0, minf=36 00:47:06.148 IO depths : 1=5.7%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:47:06.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:47:06.148 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.148 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:06.148 filename0: (groupid=0, jobs=1): err= 0: pid=3798311: Thu Nov 28 13:17:34 2024 00:47:06.148 read: IOPS=662, BW=2649KiB/s (2713kB/s)(26.1MiB/10075msec) 00:47:06.148 slat (nsec): min=5690, max=61478, avg=8385.78, stdev=4397.86 00:47:06.148 clat (usec): min=8311, max=86947, avg=24086.84, stdev=3483.96 00:47:06.148 lat (usec): min=8330, max=86954, avg=24095.22, stdev=3483.40 00:47:06.148 clat percentiles (usec): 00:47:06.148 | 1.00th=[15926], 5.00th=[23725], 10.00th=[23987], 20.00th=[23987], 00:47:06.148 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:47:06.148 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24249], 95.00th=[24511], 00:47:06.148 | 99.00th=[24773], 99.50th=[29754], 99.90th=[86508], 99.95th=[86508], 00:47:06.148 | 99.99th=[86508] 00:47:06.148 bw ( KiB/s): min= 2554, max= 2949, per=4.23%, avg=2661.45, stdev=90.03, samples=20 00:47:06.148 iops : min= 638, max= 737, avg=665.25, stdev=22.48, samples=20 00:47:06.148 lat (msec) : 10=0.69%, 20=1.29%, 50=97.78%, 100=0.24% 00:47:06.148 cpu : usr=98.79%, sys=0.83%, ctx=75, majf=0, minf=57 00:47:06.148 IO depths : 1=5.3%, 2=11.5%, 4=24.9%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:47:06.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.148 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.148 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:06.148 filename1: (groupid=0, jobs=1): err= 0: pid=3798312: Thu Nov 28 13:17:34 2024 00:47:06.148 read: IOPS=663, BW=2652KiB/s (2716kB/s)(26.2MiB/10111msec) 00:47:06.148 slat (usec): min=5, max=104, avg=10.01, stdev= 7.09 00:47:06.148 clat (msec): min=7, 
max=112, avg=24.05, stdev= 4.75 00:47:06.148 lat (msec): min=7, max=112, avg=24.06, stdev= 4.75 00:47:06.148 clat percentiles (msec): 00:47:06.148 | 1.00th=[ 9], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:47:06.148 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 25], 00:47:06.148 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:47:06.148 | 99.00th=[ 25], 99.50th=[ 26], 99.90th=[ 112], 99.95th=[ 112], 00:47:06.148 | 99.99th=[ 112] 00:47:06.148 bw ( KiB/s): min= 2554, max= 3072, per=4.25%, avg=2674.00, stdev=117.30, samples=20 00:47:06.148 iops : min= 638, max= 768, avg=668.40, stdev=29.38, samples=20 00:47:06.148 lat (msec) : 10=1.33%, 20=0.58%, 50=97.85%, 250=0.24% 00:47:06.148 cpu : usr=99.01%, sys=0.71%, ctx=28, majf=0, minf=41 00:47:06.148 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:47:06.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.148 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.148 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:06.148 filename1: (groupid=0, jobs=1): err= 0: pid=3798313: Thu Nov 28 13:17:34 2024 00:47:06.148 read: IOPS=650, BW=2602KiB/s (2665kB/s)(25.7MiB/10102msec) 00:47:06.148 slat (usec): min=5, max=106, avg=30.49, stdev=16.49 00:47:06.148 clat (msec): min=15, max=123, avg=24.25, stdev= 4.69 00:47:06.148 lat (msec): min=15, max=123, avg=24.28, stdev= 4.69 00:47:06.148 clat percentiles (msec): 00:47:06.148 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:47:06.148 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:47:06.148 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:47:06.148 | 99.00th=[ 26], 99.50th=[ 44], 99.90th=[ 113], 99.95th=[ 113], 00:47:06.148 | 99.99th=[ 124] 00:47:06.148 bw ( KiB/s): min= 2315, max= 2688, per=4.17%, avg=2620.95, stdev=98.81, samples=20 
00:47:06.148 iops : min= 578, max= 672, avg=655.10, stdev=24.80, samples=20 00:47:06.148 lat (msec) : 20=0.21%, 50=99.54%, 250=0.24% 00:47:06.148 cpu : usr=98.78%, sys=0.92%, ctx=66, majf=0, minf=38 00:47:06.148 IO depths : 1=1.7%, 2=7.9%, 4=24.7%, 8=54.9%, 16=10.8%, 32=0.0%, >=64=0.0% 00:47:06.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.148 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.148 issued rwts: total=6572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:06.148 filename1: (groupid=0, jobs=1): err= 0: pid=3798314: Thu Nov 28 13:17:34 2024 00:47:06.148 read: IOPS=653, BW=2615KiB/s (2677kB/s)(25.7MiB/10060msec) 00:47:06.148 slat (usec): min=5, max=108, avg=31.41, stdev=16.15 00:47:06.148 clat (msec): min=19, max=115, avg=24.18, stdev= 4.53 00:47:06.148 lat (msec): min=19, max=115, avg=24.21, stdev= 4.53 00:47:06.148 clat percentiles (msec): 00:47:06.148 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:47:06.148 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:47:06.148 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 00:47:06.148 | 99.00th=[ 25], 99.50th=[ 29], 99.90th=[ 113], 99.95th=[ 113], 00:47:06.148 | 99.99th=[ 116] 00:47:06.148 bw ( KiB/s): min= 2299, max= 2693, per=4.17%, avg=2622.80, stdev=106.11, samples=20 00:47:06.148 iops : min= 574, max= 673, avg=655.55, stdev=26.58, samples=20 00:47:06.148 lat (msec) : 20=0.03%, 50=99.73%, 250=0.24% 00:47:06.148 cpu : usr=99.00%, sys=0.74%, ctx=7, majf=0, minf=48 00:47:06.148 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:47:06.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.148 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.148 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.148 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:47:06.148 filename1: (groupid=0, jobs=1): err= 0: pid=3798315: Thu Nov 28 13:17:34 2024 00:47:06.148 read: IOPS=656, BW=2628KiB/s (2691kB/s)(25.9MiB/10083msec) 00:47:06.148 slat (nsec): min=5731, max=87919, avg=19874.60, stdev=13942.93 00:47:06.148 clat (msec): min=11, max=112, avg=24.20, stdev= 4.43 00:47:06.148 lat (msec): min=11, max=112, avg=24.22, stdev= 4.43 00:47:06.148 clat percentiles (msec): 00:47:06.148 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:47:06.148 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:47:06.148 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:47:06.148 | 99.00th=[ 25], 99.50th=[ 26], 99.90th=[ 113], 99.95th=[ 113], 00:47:06.148 | 99.99th=[ 113] 00:47:06.148 bw ( KiB/s): min= 2427, max= 2688, per=4.20%, avg=2642.65, stdev=75.72, samples=20 00:47:06.149 iops : min= 606, max= 672, avg=660.60, stdev=19.03, samples=20 00:47:06.149 lat (msec) : 20=0.88%, 50=98.88%, 250=0.24% 00:47:06.149 cpu : usr=98.85%, sys=0.79%, ctx=81, majf=0, minf=39 00:47:06.149 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:47:06.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.149 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.149 issued rwts: total=6624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:06.149 filename1: (groupid=0, jobs=1): err= 0: pid=3798316: Thu Nov 28 13:17:34 2024 00:47:06.149 read: IOPS=660, BW=2641KiB/s (2704kB/s)(26.1MiB/10106msec) 00:47:06.149 slat (nsec): min=5724, max=84506, avg=25012.94, stdev=14310.46 00:47:06.149 clat (msec): min=8, max=112, avg=24.03, stdev= 4.59 00:47:06.149 lat (msec): min=8, max=112, avg=24.06, stdev= 4.59 00:47:06.149 clat percentiles (msec): 00:47:06.149 | 1.00th=[ 16], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:47:06.149 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 
24], 60.00th=[ 24], 00:47:06.149 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:47:06.149 | 99.00th=[ 26], 99.50th=[ 26], 99.90th=[ 112], 99.95th=[ 113], 00:47:06.149 | 99.99th=[ 113] 00:47:06.149 bw ( KiB/s): min= 2554, max= 2949, per=4.23%, avg=2661.45, stdev=90.03, samples=20 00:47:06.149 iops : min= 638, max= 737, avg=665.25, stdev=22.48, samples=20 00:47:06.149 lat (msec) : 10=0.69%, 20=0.91%, 50=98.16%, 250=0.24% 00:47:06.149 cpu : usr=98.74%, sys=0.90%, ctx=53, majf=0, minf=33 00:47:06.149 IO depths : 1=6.1%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:47:06.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.149 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.149 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:06.149 filename1: (groupid=0, jobs=1): err= 0: pid=3798318: Thu Nov 28 13:17:34 2024 00:47:06.149 read: IOPS=653, BW=2615KiB/s (2678kB/s)(25.7MiB/10057msec) 00:47:06.149 slat (nsec): min=5686, max=61191, avg=13960.29, stdev=8793.63 00:47:06.149 clat (msec): min=18, max=110, avg=24.27, stdev= 3.45 00:47:06.149 lat (msec): min=18, max=110, avg=24.29, stdev= 3.45 00:47:06.149 clat percentiles (msec): 00:47:06.149 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:47:06.149 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:47:06.149 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:47:06.149 | 99.00th=[ 26], 99.50th=[ 30], 99.90th=[ 88], 99.95th=[ 88], 00:47:06.149 | 99.99th=[ 111] 00:47:06.149 bw ( KiB/s): min= 2299, max= 2693, per=4.17%, avg=2622.80, stdev=106.49, samples=20 00:47:06.149 iops : min= 574, max= 673, avg=655.55, stdev=26.71, samples=20 00:47:06.149 lat (msec) : 20=0.21%, 50=99.54%, 100=0.21%, 250=0.03% 00:47:06.149 cpu : usr=99.14%, sys=0.58%, ctx=30, majf=0, minf=35 00:47:06.149 IO depths : 1=5.3%, 2=11.5%, 
4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:47:06.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.149 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.149 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:06.149 filename1: (groupid=0, jobs=1): err= 0: pid=3798319: Thu Nov 28 13:17:34 2024 00:47:06.149 read: IOPS=654, BW=2618KiB/s (2681kB/s)(25.8MiB/10072msec) 00:47:06.149 slat (nsec): min=5735, max=89889, avg=27296.59, stdev=15419.56 00:47:06.149 clat (msec): min=22, max=112, avg=24.23, stdev= 4.40 00:47:06.149 lat (msec): min=22, max=112, avg=24.26, stdev= 4.40 00:47:06.149 clat percentiles (msec): 00:47:06.149 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:47:06.149 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:47:06.149 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:47:06.149 | 99.00th=[ 25], 99.50th=[ 26], 99.90th=[ 113], 99.95th=[ 113], 00:47:06.149 | 99.99th=[ 113] 00:47:06.149 bw ( KiB/s): min= 2432, max= 2688, per=4.18%, avg=2629.20, stdev=78.08, samples=20 00:47:06.149 iops : min= 608, max= 672, avg=657.20, stdev=19.58, samples=20 00:47:06.149 lat (msec) : 50=99.76%, 250=0.24% 00:47:06.149 cpu : usr=98.79%, sys=0.85%, ctx=63, majf=0, minf=30 00:47:06.149 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:47:06.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.149 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.149 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:06.149 filename1: (groupid=0, jobs=1): err= 0: pid=3798320: Thu Nov 28 13:17:34 2024 00:47:06.149 read: IOPS=647, BW=2592KiB/s (2654kB/s)(25.5MiB/10071msec) 00:47:06.149 slat (usec): min=5, max=122, 
avg=18.07, stdev=17.10 00:47:06.149 clat (msec): min=6, max=112, avg=24.59, stdev= 5.43 00:47:06.149 lat (msec): min=6, max=112, avg=24.61, stdev= 5.43 00:47:06.149 clat percentiles (msec): 00:47:06.149 | 1.00th=[ 16], 5.00th=[ 20], 10.00th=[ 23], 20.00th=[ 24], 00:47:06.149 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 25], 00:47:06.149 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 27], 95.00th=[ 32], 00:47:06.149 | 99.00th=[ 40], 99.50th=[ 44], 99.90th=[ 113], 99.95th=[ 113], 00:47:06.149 | 99.99th=[ 113] 00:47:06.149 bw ( KiB/s): min= 2224, max= 2746, per=4.14%, avg=2602.80, stdev=115.72, samples=20 00:47:06.149 iops : min= 556, max= 686, avg=650.60, stdev=28.89, samples=20 00:47:06.149 lat (msec) : 10=0.44%, 20=5.10%, 50=94.21%, 100=0.05%, 250=0.20% 00:47:06.149 cpu : usr=99.01%, sys=0.72%, ctx=14, majf=0, minf=75 00:47:06.149 IO depths : 1=0.7%, 2=1.4%, 4=4.5%, 8=77.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:47:06.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.149 complete : 0=0.0%, 4=89.7%, 8=8.4%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.149 issued rwts: total=6526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:06.149 filename2: (groupid=0, jobs=1): err= 0: pid=3798321: Thu Nov 28 13:17:34 2024 00:47:06.149 read: IOPS=656, BW=2627KiB/s (2690kB/s)(25.9MiB/10087msec) 00:47:06.149 slat (usec): min=5, max=115, avg=16.85, stdev=18.70 00:47:06.149 clat (msec): min=12, max=112, avg=24.23, stdev= 4.41 00:47:06.149 lat (msec): min=12, max=112, avg=24.25, stdev= 4.41 00:47:06.149 clat percentiles (msec): 00:47:06.149 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:47:06.149 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 25], 00:47:06.149 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:47:06.149 | 99.00th=[ 25], 99.50th=[ 26], 99.90th=[ 113], 99.95th=[ 113], 00:47:06.149 | 99.99th=[ 113] 00:47:06.149 bw ( KiB/s): min= 
2560, max= 2688, per=4.20%, avg=2642.60, stdev=62.21, samples=20 00:47:06.149 iops : min= 640, max= 672, avg=660.60, stdev=15.52, samples=20 00:47:06.149 lat (msec) : 20=0.48%, 50=99.28%, 250=0.24% 00:47:06.149 cpu : usr=99.12%, sys=0.60%, ctx=14, majf=0, minf=39 00:47:06.149 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:47:06.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.149 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.149 issued rwts: total=6624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:06.149 filename2: (groupid=0, jobs=1): err= 0: pid=3798322: Thu Nov 28 13:17:34 2024 00:47:06.149 read: IOPS=654, BW=2617KiB/s (2680kB/s)(25.8MiB/10075msec) 00:47:06.149 slat (usec): min=5, max=114, avg=30.87, stdev=17.98 00:47:06.149 clat (msec): min=10, max=114, avg=24.20, stdev= 4.43 00:47:06.149 lat (msec): min=10, max=114, avg=24.23, stdev= 4.43 00:47:06.149 clat percentiles (msec): 00:47:06.149 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:47:06.149 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:47:06.149 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:47:06.149 | 99.00th=[ 25], 99.50th=[ 32], 99.90th=[ 113], 99.95th=[ 113], 00:47:06.149 | 99.99th=[ 115] 00:47:06.149 bw ( KiB/s): min= 2432, max= 2688, per=4.18%, avg=2629.20, stdev=78.08, samples=20 00:47:06.149 iops : min= 608, max= 672, avg=657.20, stdev=19.58, samples=20 00:47:06.149 lat (msec) : 20=0.06%, 50=99.70%, 250=0.24% 00:47:06.149 cpu : usr=98.64%, sys=0.93%, ctx=62, majf=0, minf=30 00:47:06.149 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:47:06.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.149 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.149 issued rwts: total=6592,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:47:06.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:06.149 filename2: (groupid=0, jobs=1): err= 0: pid=3798323: Thu Nov 28 13:17:34 2024 00:47:06.149 read: IOPS=653, BW=2615KiB/s (2678kB/s)(25.7MiB/10062msec) 00:47:06.149 slat (usec): min=5, max=108, avg=32.27, stdev=17.59 00:47:06.149 clat (msec): min=15, max=115, avg=24.16, stdev= 4.57 00:47:06.149 lat (msec): min=15, max=115, avg=24.19, stdev= 4.57 00:47:06.149 clat percentiles (msec): 00:47:06.149 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:47:06.149 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:47:06.149 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 00:47:06.149 | 99.00th=[ 26], 99.50th=[ 32], 99.90th=[ 113], 99.95th=[ 113], 00:47:06.149 | 99.99th=[ 116] 00:47:06.149 bw ( KiB/s): min= 2299, max= 2688, per=4.17%, avg=2622.75, stdev=104.10, samples=20 00:47:06.149 iops : min= 574, max= 672, avg=655.50, stdev=26.06, samples=20 00:47:06.149 lat (msec) : 20=0.27%, 50=99.48%, 250=0.24% 00:47:06.149 cpu : usr=98.61%, sys=0.95%, ctx=103, majf=0, minf=32 00:47:06.149 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:47:06.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.149 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.149 issued rwts: total=6578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:06.149 filename2: (groupid=0, jobs=1): err= 0: pid=3798324: Thu Nov 28 13:17:34 2024 00:47:06.149 read: IOPS=654, BW=2618KiB/s (2681kB/s)(25.8MiB/10070msec) 00:47:06.149 slat (usec): min=5, max=110, avg=31.99, stdev=18.56 00:47:06.149 clat (msec): min=22, max=114, avg=24.12, stdev= 4.41 00:47:06.149 lat (msec): min=22, max=114, avg=24.15, stdev= 4.41 00:47:06.149 clat percentiles (msec): 00:47:06.149 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 
00:47:06.150 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:47:06.150 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 00:47:06.150 | 99.00th=[ 25], 99.50th=[ 26], 99.90th=[ 113], 99.95th=[ 113], 00:47:06.150 | 99.99th=[ 115] 00:47:06.150 bw ( KiB/s): min= 2432, max= 2688, per=4.18%, avg=2629.45, stdev=76.28, samples=20 00:47:06.150 iops : min= 608, max= 672, avg=657.25, stdev=19.01, samples=20 00:47:06.150 lat (msec) : 50=99.76%, 250=0.24% 00:47:06.150 cpu : usr=98.82%, sys=0.74%, ctx=70, majf=0, minf=41 00:47:06.150 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:47:06.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.150 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.150 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:06.150 filename2: (groupid=0, jobs=1): err= 0: pid=3798325: Thu Nov 28 13:17:34 2024 00:47:06.150 read: IOPS=663, BW=2652KiB/s (2716kB/s)(26.2MiB/10110msec) 00:47:06.150 slat (nsec): min=5740, max=56852, avg=11946.59, stdev=7288.12 00:47:06.150 clat (msec): min=6, max=112, avg=24.02, stdev= 4.76 00:47:06.150 lat (msec): min=6, max=112, avg=24.03, stdev= 4.76 00:47:06.150 clat percentiles (msec): 00:47:06.150 | 1.00th=[ 11], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:47:06.150 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:47:06.150 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:47:06.150 | 99.00th=[ 25], 99.50th=[ 25], 99.90th=[ 113], 99.95th=[ 113], 00:47:06.150 | 99.99th=[ 113] 00:47:06.150 bw ( KiB/s): min= 2554, max= 3078, per=4.25%, avg=2674.30, stdev=110.49, samples=20 00:47:06.150 iops : min= 638, max= 769, avg=668.45, stdev=27.55, samples=20 00:47:06.150 lat (msec) : 10=0.95%, 20=1.07%, 50=97.73%, 250=0.24% 00:47:06.150 cpu : usr=98.81%, sys=0.92%, ctx=14, majf=0, minf=40 00:47:06.150 
IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:47:06.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.150 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.150 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:06.150 filename2: (groupid=0, jobs=1): err= 0: pid=3798326: Thu Nov 28 13:17:34 2024 00:47:06.150 read: IOPS=654, BW=2618KiB/s (2681kB/s)(25.8MiB/10071msec) 00:47:06.150 slat (usec): min=5, max=106, avg=23.90, stdev=19.89 00:47:06.150 clat (msec): min=22, max=113, avg=24.20, stdev= 4.38 00:47:06.150 lat (msec): min=22, max=113, avg=24.23, stdev= 4.38 00:47:06.150 clat percentiles (msec): 00:47:06.150 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:47:06.150 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:47:06.150 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:47:06.150 | 99.00th=[ 25], 99.50th=[ 30], 99.90th=[ 113], 99.95th=[ 113], 00:47:06.150 | 99.99th=[ 114] 00:47:06.150 bw ( KiB/s): min= 2432, max= 2688, per=4.18%, avg=2629.45, stdev=76.81, samples=20 00:47:06.150 iops : min= 608, max= 672, avg=657.25, stdev=19.19, samples=20 00:47:06.150 lat (msec) : 50=99.76%, 250=0.24% 00:47:06.150 cpu : usr=99.11%, sys=0.63%, ctx=15, majf=0, minf=31 00:47:06.150 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:47:06.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.150 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.150 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:06.150 filename2: (groupid=0, jobs=1): err= 0: pid=3798327: Thu Nov 28 13:17:34 2024 00:47:06.150 read: IOPS=653, BW=2615KiB/s (2678kB/s)(25.7MiB/10057msec) 00:47:06.150 slat (nsec): 
min=5725, max=74108, avg=13550.08, stdev=8961.78 00:47:06.150 clat (msec): min=18, max=112, avg=24.35, stdev= 4.51 00:47:06.150 lat (msec): min=18, max=112, avg=24.37, stdev= 4.51 00:47:06.150 clat percentiles (msec): 00:47:06.150 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:47:06.150 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 25], 00:47:06.150 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:47:06.150 | 99.00th=[ 29], 99.50th=[ 31], 99.90th=[ 113], 99.95th=[ 113], 00:47:06.150 | 99.99th=[ 113] 00:47:06.150 bw ( KiB/s): min= 2299, max= 2693, per=4.17%, avg=2622.80, stdev=105.60, samples=20 00:47:06.150 iops : min= 574, max= 673, avg=655.55, stdev=26.49, samples=20 00:47:06.150 lat (msec) : 20=0.30%, 50=99.45%, 250=0.24% 00:47:06.150 cpu : usr=98.90%, sys=0.81%, ctx=84, majf=0, minf=34 00:47:06.150 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:47:06.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.150 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.150 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:06.150 filename2: (groupid=0, jobs=1): err= 0: pid=3798329: Thu Nov 28 13:17:34 2024 00:47:06.150 read: IOPS=661, BW=2646KiB/s (2709kB/s)(26.1MiB/10111msec) 00:47:06.150 slat (usec): min=5, max=112, avg=32.14, stdev=17.82 00:47:06.150 clat (msec): min=6, max=114, avg=23.91, stdev= 4.71 00:47:06.150 lat (msec): min=6, max=114, avg=23.94, stdev= 4.71 00:47:06.150 clat percentiles (msec): 00:47:06.150 | 1.00th=[ 10], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:47:06.150 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:47:06.150 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:47:06.150 | 99.00th=[ 25], 99.50th=[ 25], 99.90th=[ 112], 99.95th=[ 112], 00:47:06.150 | 99.99th=[ 115] 00:47:06.150 bw ( KiB/s): 
min= 2554, max= 3072, per=4.24%, avg=2667.60, stdev=112.18, samples=20 00:47:06.150 iops : min= 638, max= 768, avg=666.80, stdev=28.06, samples=20 00:47:06.150 lat (msec) : 10=1.05%, 20=0.75%, 50=97.97%, 250=0.24% 00:47:06.150 cpu : usr=98.58%, sys=0.96%, ctx=127, majf=0, minf=46 00:47:06.150 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:47:06.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.150 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.150 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:06.150 00:47:06.150 Run status group 0 (all jobs): 00:47:06.150 READ: bw=61.4MiB/s (64.4MB/s), 2592KiB/s-2741KiB/s (2654kB/s-2806kB/s), io=621MiB (651MB), run=10018-10111msec 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:06.150 
13:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:06.150 13:17:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:06.151 bdev_null0 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:06.151 [2024-11-28 13:17:35.034526] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:06.151 13:17:35 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:06.151 bdev_null1 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:47:06.151 13:17:35 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:47:06.151 { 00:47:06.151 "params": { 00:47:06.151 "name": "Nvme$subsystem", 00:47:06.151 "trtype": "$TEST_TRANSPORT", 00:47:06.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:47:06.151 "adrfam": "ipv4", 00:47:06.151 "trsvcid": "$NVMF_PORT", 00:47:06.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:47:06.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:47:06.151 "hdgst": ${hdgst:-false}, 00:47:06.151 "ddgst": ${ddgst:-false} 00:47:06.151 }, 00:47:06.151 "method": "bdev_nvme_attach_controller" 00:47:06.151 } 00:47:06.151 EOF 00:47:06.151 )") 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:47:06.151 { 00:47:06.151 "params": { 00:47:06.151 "name": "Nvme$subsystem", 00:47:06.151 "trtype": "$TEST_TRANSPORT", 00:47:06.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:47:06.151 "adrfam": "ipv4", 00:47:06.151 "trsvcid": "$NVMF_PORT", 00:47:06.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:47:06.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:47:06.151 "hdgst": ${hdgst:-false}, 00:47:06.151 "ddgst": ${ddgst:-false} 00:47:06.151 }, 00:47:06.151 "method": "bdev_nvme_attach_controller" 00:47:06.151 } 00:47:06.151 EOF 00:47:06.151 )") 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:47:06.151 
13:17:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:47:06.151 "params": { 00:47:06.151 "name": "Nvme0", 00:47:06.151 "trtype": "tcp", 00:47:06.151 "traddr": "10.0.0.2", 00:47:06.151 "adrfam": "ipv4", 00:47:06.151 "trsvcid": "4420", 00:47:06.151 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:06.151 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:06.151 "hdgst": false, 00:47:06.151 "ddgst": false 00:47:06.151 }, 00:47:06.151 "method": "bdev_nvme_attach_controller" 00:47:06.151 },{ 00:47:06.151 "params": { 00:47:06.151 "name": "Nvme1", 00:47:06.151 "trtype": "tcp", 00:47:06.151 "traddr": "10.0.0.2", 00:47:06.151 "adrfam": "ipv4", 00:47:06.151 "trsvcid": "4420", 00:47:06.151 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:47:06.151 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:47:06.151 "hdgst": false, 00:47:06.151 "ddgst": false 00:47:06.151 }, 00:47:06.151 "method": "bdev_nvme_attach_controller" 00:47:06.151 }' 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print 
$3}' 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:47:06.151 13:17:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:06.151 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:47:06.151 ... 00:47:06.151 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:47:06.151 ... 00:47:06.151 fio-3.35 00:47:06.151 Starting 4 threads 00:47:11.445 00:47:11.445 filename0: (groupid=0, jobs=1): err= 0: pid=3800696: Thu Nov 28 13:17:41 2024 00:47:11.445 read: IOPS=2872, BW=22.4MiB/s (23.5MB/s)(112MiB/5001msec) 00:47:11.445 slat (nsec): min=5536, max=33344, avg=6384.39, stdev=1877.34 00:47:11.445 clat (usec): min=971, max=45848, avg=2768.48, stdev=1039.58 00:47:11.445 lat (usec): min=977, max=45875, avg=2774.86, stdev=1039.72 00:47:11.445 clat percentiles (usec): 00:47:11.445 | 1.00th=[ 2180], 5.00th=[ 2474], 10.00th=[ 2573], 20.00th=[ 2671], 00:47:11.445 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2737], 00:47:11.445 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2966], 95.00th=[ 2999], 00:47:11.445 | 99.00th=[ 3752], 99.50th=[ 4080], 99.90th=[ 4424], 99.95th=[45876], 00:47:11.445 | 99.99th=[45876] 00:47:11.445 bw ( KiB/s): min=21008, max=23328, per=24.60%, avg=22954.67, stdev=737.22, samples=9 00:47:11.445 iops : min= 2626, max= 2916, avg=2869.33, stdev=92.15, samples=9 00:47:11.445 lat (usec) : 1000=0.02% 00:47:11.445 lat (msec) : 2=0.25%, 4=99.15%, 10=0.52%, 50=0.06% 00:47:11.445 cpu : 
usr=94.40%, sys=4.12%, ctx=158, majf=0, minf=106 00:47:11.445 IO depths : 1=0.1%, 2=0.1%, 4=69.7%, 8=30.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:11.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:11.445 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:11.445 issued rwts: total=14367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:11.445 latency : target=0, window=0, percentile=100.00%, depth=8 00:47:11.445 filename0: (groupid=0, jobs=1): err= 0: pid=3800697: Thu Nov 28 13:17:41 2024 00:47:11.445 read: IOPS=2908, BW=22.7MiB/s (23.8MB/s)(114MiB/5003msec) 00:47:11.445 slat (nsec): min=5522, max=61053, avg=6272.89, stdev=2158.68 00:47:11.445 clat (usec): min=933, max=4721, avg=2734.38, stdev=234.18 00:47:11.445 lat (usec): min=951, max=4727, avg=2740.66, stdev=233.88 00:47:11.445 clat percentiles (usec): 00:47:11.445 | 1.00th=[ 2040], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2671], 00:47:11.445 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2737], 00:47:11.445 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2933], 95.00th=[ 2999], 00:47:11.445 | 99.00th=[ 3752], 99.50th=[ 4015], 99.90th=[ 4359], 99.95th=[ 4490], 00:47:11.445 | 99.99th=[ 4686] 00:47:11.445 bw ( KiB/s): min=23120, max=23984, per=24.99%, avg=23317.33, stdev=271.18, samples=9 00:47:11.445 iops : min= 2890, max= 2998, avg=2914.67, stdev=33.90, samples=9 00:47:11.445 lat (usec) : 1000=0.01% 00:47:11.445 lat (msec) : 2=0.71%, 4=98.74%, 10=0.54% 00:47:11.445 cpu : usr=96.26%, sys=3.50%, ctx=5, majf=0, minf=73 00:47:11.445 IO depths : 1=0.1%, 2=0.1%, 4=70.0%, 8=29.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:11.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:11.445 complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:11.445 issued rwts: total=14549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:11.445 latency : target=0, window=0, percentile=100.00%, depth=8 00:47:11.445 filename1: (groupid=0, 
jobs=1): err= 0: pid=3800698: Thu Nov 28 13:17:41 2024 00:47:11.445 read: IOPS=2906, BW=22.7MiB/s (23.8MB/s)(114MiB/5001msec) 00:47:11.445 slat (nsec): min=5530, max=61859, avg=8217.50, stdev=1968.80 00:47:11.445 clat (usec): min=1087, max=7224, avg=2731.81, stdev=230.14 00:47:11.445 lat (usec): min=1093, max=7255, avg=2740.03, stdev=230.02 00:47:11.445 clat percentiles (usec): 00:47:11.445 | 1.00th=[ 2147], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2671], 00:47:11.445 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737], 00:47:11.445 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2966], 95.00th=[ 2999], 00:47:11.445 | 99.00th=[ 3556], 99.50th=[ 3916], 99.90th=[ 4621], 99.95th=[ 5407], 00:47:11.445 | 99.99th=[ 7177] 00:47:11.446 bw ( KiB/s): min=23024, max=23567, per=24.95%, avg=23283.44, stdev=167.36, samples=9 00:47:11.446 iops : min= 2878, max= 2945, avg=2910.33, stdev=20.74, samples=9 00:47:11.446 lat (msec) : 2=0.50%, 4=99.08%, 10=0.43% 00:47:11.446 cpu : usr=96.64%, sys=3.10%, ctx=6, majf=0, minf=62 00:47:11.446 IO depths : 1=0.1%, 2=0.1%, 4=70.8%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:11.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:11.446 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:11.446 issued rwts: total=14533,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:11.446 latency : target=0, window=0, percentile=100.00%, depth=8 00:47:11.446 filename1: (groupid=0, jobs=1): err= 0: pid=3800699: Thu Nov 28 13:17:41 2024 00:47:11.446 read: IOPS=2980, BW=23.3MiB/s (24.4MB/s)(116MiB/5001msec) 00:47:11.446 slat (nsec): min=5526, max=61468, avg=7957.16, stdev=1719.55 00:47:11.446 clat (usec): min=1221, max=4445, avg=2662.70, stdev=324.04 00:47:11.446 lat (usec): min=1227, max=4453, avg=2670.65, stdev=324.01 00:47:11.446 clat percentiles (usec): 00:47:11.446 | 1.00th=[ 1909], 5.00th=[ 2089], 10.00th=[ 2245], 20.00th=[ 2442], 00:47:11.446 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 
2704], 60.00th=[ 2704], 00:47:11.446 | 70.00th=[ 2737], 80.00th=[ 2737], 90.00th=[ 2966], 95.00th=[ 3261], 00:47:11.446 | 99.00th=[ 3654], 99.50th=[ 3752], 99.90th=[ 4047], 99.95th=[ 4080], 00:47:11.446 | 99.99th=[ 4424] 00:47:11.446 bw ( KiB/s): min=23136, max=24432, per=25.50%, avg=23795.56, stdev=449.62, samples=9 00:47:11.446 iops : min= 2892, max= 3054, avg=2974.44, stdev=56.20, samples=9 00:47:11.446 lat (msec) : 2=1.61%, 4=98.22%, 10=0.17% 00:47:11.446 cpu : usr=97.04%, sys=2.72%, ctx=6, majf=0, minf=96 00:47:11.446 IO depths : 1=0.1%, 2=0.7%, 4=70.9%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:11.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:11.446 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:11.446 issued rwts: total=14907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:11.446 latency : target=0, window=0, percentile=100.00%, depth=8 00:47:11.446 00:47:11.446 Run status group 0 (all jobs): 00:47:11.446 READ: bw=91.1MiB/s (95.6MB/s), 22.4MiB/s-23.3MiB/s (23.5MB/s-24.4MB/s), io=456MiB (478MB), run=5001-5003msec 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:11.446 00:47:11.446 real 0m24.779s 00:47:11.446 user 5m21.996s 00:47:11.446 sys 0m4.520s 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:11.446 13:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:11.446 ************************************ 00:47:11.446 END TEST fio_dif_rand_params 00:47:11.446 ************************************ 00:47:11.446 13:17:41 
nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:47:11.446 13:17:41 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:47:11.446 13:17:41 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:11.446 13:17:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:11.446 ************************************ 00:47:11.446 START TEST fio_dif_digest 00:47:11.446 ************************************ 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 
--dif-type 3 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:11.446 bdev_null0 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:11.446 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:11.708 [2024-11-28 13:17:41.570355] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:47:11.708 13:17:41 
nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:47:11.708 { 00:47:11.708 "params": { 00:47:11.708 "name": "Nvme$subsystem", 00:47:11.708 "trtype": "$TEST_TRANSPORT", 00:47:11.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:47:11.708 "adrfam": "ipv4", 00:47:11.708 "trsvcid": "$NVMF_PORT", 00:47:11.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:47:11.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:47:11.708 "hdgst": ${hdgst:-false}, 00:47:11.708 "ddgst": ${ddgst:-false} 00:47:11.708 }, 00:47:11.708 "method": "bdev_nvme_attach_controller" 00:47:11.708 } 00:47:11.708 EOF 00:47:11.708 )") 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:47:11.708 "params": { 00:47:11.708 "name": "Nvme0", 00:47:11.708 "trtype": "tcp", 00:47:11.708 "traddr": "10.0.0.2", 00:47:11.708 "adrfam": "ipv4", 00:47:11.708 "trsvcid": "4420", 00:47:11.708 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:11.708 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:11.708 "hdgst": true, 00:47:11.708 "ddgst": true 00:47:11.708 }, 00:47:11.708 "method": "bdev_nvme_attach_controller" 00:47:11.708 }' 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:47:11.708 13:17:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:11.969 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:47:11.969 ... 
00:47:11.969 fio-3.35 00:47:11.969 Starting 3 threads 00:47:24.197 00:47:24.197 filename0: (groupid=0, jobs=1): err= 0: pid=3801918: Thu Nov 28 13:17:52 2024 00:47:24.197 read: IOPS=252, BW=31.6MiB/s (33.1MB/s)(316MiB/10024msec) 00:47:24.197 slat (nsec): min=5971, max=33415, avg=7064.89, stdev=1322.53 00:47:24.197 clat (usec): min=7294, max=93915, avg=11872.34, stdev=4233.81 00:47:24.197 lat (usec): min=7301, max=93922, avg=11879.40, stdev=4233.82 00:47:24.197 clat percentiles (usec): 00:47:24.197 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10421], 20.00th=[10683], 00:47:24.197 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:47:24.197 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12780], 95.00th=[13173], 00:47:24.197 | 99.00th=[15139], 99.50th=[52691], 99.90th=[53740], 99.95th=[53740], 00:47:24.197 | 99.99th=[93848] 00:47:24.197 bw ( KiB/s): min=27904, max=34048, per=28.47%, avg=32358.40, stdev=1613.54, samples=20 00:47:24.197 iops : min= 218, max= 266, avg=252.80, stdev=12.61, samples=20 00:47:24.197 lat (msec) : 10=4.74%, 20=94.35%, 50=0.04%, 100=0.87% 00:47:24.197 cpu : usr=93.22%, sys=5.65%, ctx=707, majf=0, minf=131 00:47:24.197 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:24.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:24.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:24.197 issued rwts: total=2531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:24.197 latency : target=0, window=0, percentile=100.00%, depth=3 00:47:24.197 filename0: (groupid=0, jobs=1): err= 0: pid=3801919: Thu Nov 28 13:17:52 2024 00:47:24.197 read: IOPS=337, BW=42.2MiB/s (44.3MB/s)(424MiB/10045msec) 00:47:24.197 slat (nsec): min=5904, max=32491, avg=6591.84, stdev=998.70 00:47:24.197 clat (usec): min=5737, max=46287, avg=8858.29, stdev=1189.86 00:47:24.197 lat (usec): min=5744, max=46293, avg=8864.88, stdev=1189.83 00:47:24.197 clat percentiles (usec): 00:47:24.197 | 
1.00th=[ 6325], 5.00th=[ 7439], 10.00th=[ 7898], 20.00th=[ 8291], 00:47:24.197 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9110], 00:47:24.197 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9765], 95.00th=[10028], 00:47:24.197 | 99.00th=[10421], 99.50th=[10552], 99.90th=[11207], 99.95th=[44303], 00:47:24.197 | 99.99th=[46400] 00:47:24.197 bw ( KiB/s): min=42240, max=44800, per=38.21%, avg=43417.60, stdev=735.42, samples=20 00:47:24.197 iops : min= 330, max= 350, avg=339.20, stdev= 5.75, samples=20 00:47:24.197 lat (msec) : 10=95.52%, 20=4.42%, 50=0.06% 00:47:24.197 cpu : usr=96.22%, sys=3.55%, ctx=21, majf=0, minf=107 00:47:24.197 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:24.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:24.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:24.197 issued rwts: total=3394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:24.197 latency : target=0, window=0, percentile=100.00%, depth=3 00:47:24.197 filename0: (groupid=0, jobs=1): err= 0: pid=3801920: Thu Nov 28 13:17:52 2024 00:47:24.197 read: IOPS=299, BW=37.4MiB/s (39.2MB/s)(374MiB/10004msec) 00:47:24.197 slat (nsec): min=5896, max=31691, avg=6722.12, stdev=1042.13 00:47:24.197 clat (usec): min=4797, max=52392, avg=10018.96, stdev=1591.36 00:47:24.197 lat (usec): min=4803, max=52398, avg=10025.68, stdev=1591.38 00:47:24.197 clat percentiles (usec): 00:47:24.197 | 1.00th=[ 7177], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9372], 00:47:24.197 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:47:24.197 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11338], 00:47:24.197 | 99.00th=[11994], 99.50th=[12256], 99.90th=[51119], 99.95th=[52167], 00:47:24.197 | 99.99th=[52167] 00:47:24.197 bw ( KiB/s): min=36352, max=41216, per=33.73%, avg=38332.63, stdev=1125.11, samples=19 00:47:24.198 iops : min= 284, max= 322, avg=299.47, stdev= 8.79, 
samples=19 00:47:24.198 lat (msec) : 10=49.72%, 20=50.18%, 100=0.10% 00:47:24.198 cpu : usr=94.09%, sys=5.45%, ctx=410, majf=0, minf=165 00:47:24.198 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:24.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:24.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:24.198 issued rwts: total=2993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:24.198 latency : target=0, window=0, percentile=100.00%, depth=3 00:47:24.198 00:47:24.198 Run status group 0 (all jobs): 00:47:24.198 READ: bw=111MiB/s (116MB/s), 31.6MiB/s-42.2MiB/s (33.1MB/s-44.3MB/s), io=1115MiB (1169MB), run=10004-10045msec 00:47:24.198 13:17:52 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:47:24.198 13:17:52 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:47:24.198 13:17:52 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:47:24.198 13:17:52 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:47:24.198 13:17:52 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:47:24.198 13:17:52 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:47:24.198 13:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:24.198 13:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:24.198 13:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:24.198 13:17:52 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:47:24.198 13:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:24.198 13:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:24.198 13:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:24.198 00:47:24.198 real 
0m11.191s 00:47:24.198 user 0m45.084s 00:47:24.198 sys 0m1.783s 00:47:24.198 13:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:24.198 13:17:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:24.198 ************************************ 00:47:24.198 END TEST fio_dif_digest 00:47:24.198 ************************************ 00:47:24.198 13:17:52 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:47:24.198 13:17:52 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:47:24.198 13:17:52 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:24.198 13:17:52 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:47:24.198 13:17:52 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:24.198 13:17:52 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:47:24.198 13:17:52 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:24.198 13:17:52 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:24.198 rmmod nvme_tcp 00:47:24.198 rmmod nvme_fabrics 00:47:24.198 rmmod nvme_keyring 00:47:24.198 13:17:52 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:24.198 13:17:52 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:47:24.198 13:17:52 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:47:24.198 13:17:52 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3791741 ']' 00:47:24.198 13:17:52 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3791741 00:47:24.198 13:17:52 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3791741 ']' 00:47:24.198 13:17:52 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3791741 00:47:24.198 13:17:52 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:47:24.198 13:17:52 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:24.198 13:17:52 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3791741 00:47:24.198 13:17:52 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:24.198 13:17:52 
nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:24.198 13:17:52 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3791741' 00:47:24.198 killing process with pid 3791741 00:47:24.198 13:17:52 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3791741 00:47:24.198 13:17:52 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3791741 00:47:24.198 13:17:53 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:47:24.198 13:17:53 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:47:26.742 Waiting for block devices as requested 00:47:26.742 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:47:26.742 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:47:26.742 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:47:26.742 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:47:26.742 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:47:26.742 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:47:26.742 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:47:27.003 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:47:27.003 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:47:27.263 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:47:27.263 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:47:27.263 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:47:27.523 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:47:27.523 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:47:27.523 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:47:27.784 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:47:27.784 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:47:28.044 13:17:58 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:28.044 13:17:58 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:28.044 13:17:58 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:47:28.044 13:17:58 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:47:28.044 13:17:58 nvmf_dif -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:47:28.044 13:17:58 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:47:28.044 13:17:58 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:28.044 13:17:58 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:47:28.044 13:17:58 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:28.044 13:17:58 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:28.044 13:17:58 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:30.589 13:18:00 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:47:30.589 00:47:30.589 real 1m18.890s 00:47:30.589 user 8m5.414s 00:47:30.589 sys 0m21.990s 00:47:30.589 13:18:00 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:30.589 13:18:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:30.589 ************************************ 00:47:30.589 END TEST nvmf_dif 00:47:30.589 ************************************ 00:47:30.589 13:18:00 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:47:30.589 13:18:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:47:30.589 13:18:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:30.589 13:18:00 -- common/autotest_common.sh@10 -- # set +x 00:47:30.589 ************************************ 00:47:30.589 START TEST nvmf_abort_qd_sizes 00:47:30.589 ************************************ 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:47:30.589 * Looking for test storage... 
00:47:30.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:47:30.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:30.589 --rc genhtml_branch_coverage=1 00:47:30.589 --rc genhtml_function_coverage=1 00:47:30.589 --rc genhtml_legend=1 00:47:30.589 --rc geninfo_all_blocks=1 00:47:30.589 --rc geninfo_unexecuted_blocks=1 00:47:30.589 00:47:30.589 ' 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:47:30.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:30.589 --rc genhtml_branch_coverage=1 00:47:30.589 --rc genhtml_function_coverage=1 00:47:30.589 --rc genhtml_legend=1 00:47:30.589 --rc 
geninfo_all_blocks=1 00:47:30.589 --rc geninfo_unexecuted_blocks=1 00:47:30.589 00:47:30.589 ' 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:47:30.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:30.589 --rc genhtml_branch_coverage=1 00:47:30.589 --rc genhtml_function_coverage=1 00:47:30.589 --rc genhtml_legend=1 00:47:30.589 --rc geninfo_all_blocks=1 00:47:30.589 --rc geninfo_unexecuted_blocks=1 00:47:30.589 00:47:30.589 ' 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:47:30.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:30.589 --rc genhtml_branch_coverage=1 00:47:30.589 --rc genhtml_function_coverage=1 00:47:30.589 --rc genhtml_legend=1 00:47:30.589 --rc geninfo_all_blocks=1 00:47:30.589 --rc geninfo_unexecuted_blocks=1 00:47:30.589 00:47:30.589 ' 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:30.589 13:18:00 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:30.589 13:18:00 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:30.590 13:18:00 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:30.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:47:30.590 13:18:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:47:37.256 13:18:07 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:47:37.256 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:47:37.256 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:47:37.256 Found net devices under 0000:4b:00.0: cvl_0_0 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:47:37.256 Found net devices under 0000:4b:00.1: cvl_0_1 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:47:37.256 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:47:37.518 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:47:37.518 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:47:37.518 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:47:37.518 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:47:37.518 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:47:37.518 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:47:37.518 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:47:37.518 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:47:37.518 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:47:37.780 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:47:37.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:37.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:47:37.780 00:47:37.780 --- 10.0.0.2 ping statistics --- 00:47:37.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:37.780 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:47:37.780 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:47:37.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:47:37.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:47:37.780 00:47:37.780 --- 10.0.0.1 ping statistics --- 00:47:37.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:37.780 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:47:37.780 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:37.780 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:47:37.780 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:47:37.780 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:47:41.083 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:47:41.083 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:47:41.083 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:47:41.083 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:47:41.083 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:47:41.083 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:47:41.083 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:47:41.083 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:47:41.083 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:47:41.083 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:47:41.084 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:47:41.084 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:47:41.344 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:47:41.344 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:47:41.344 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:47:41.344 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:47:41.344 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:47:41.605 13:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:41.605 13:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:47:41.605 13:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:47:41.605 13:18:11 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:41.605 13:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:47:41.605 13:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:47:41.605 13:18:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:47:41.605 13:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:47:41.605 13:18:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:41.605 13:18:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:41.605 13:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3811903 00:47:41.605 13:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3811903 00:47:41.605 13:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:47:41.605 13:18:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3811903 ']' 00:47:41.605 13:18:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:41.605 13:18:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:41.606 13:18:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:41.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:41.606 13:18:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:41.606 13:18:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:41.606 [2024-11-28 13:18:11.728212] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:47:41.606 [2024-11-28 13:18:11.728261] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:41.866 [2024-11-28 13:18:11.866937] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:41.866 [2024-11-28 13:18:11.924062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:41.867 [2024-11-28 13:18:11.943739] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:41.867 [2024-11-28 13:18:11.943767] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:41.867 [2024-11-28 13:18:11.943775] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:41.867 [2024-11-28 13:18:11.943782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:41.867 [2024-11-28 13:18:11.943787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:47:41.867 [2024-11-28 13:18:11.945527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:41.867 [2024-11-28 13:18:11.945674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:47:41.867 [2024-11-28 13:18:11.945788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:41.867 [2024-11-28 13:18:11.945789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:47:42.439 13:18:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:42.439 13:18:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:47:42.439 13:18:12 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:42.439 13:18:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:42.439 13:18:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:42.700 13:18:12 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:42.700 13:18:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:47:42.700 13:18:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:47:42.700 13:18:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:47:42.700 13:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:47:42.700 13:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:47:42.700 13:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:47:42.700 13:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:47:42.700 13:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:47:42.701 13:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:47:42.701 13:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:47:42.701 13:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:47:42.701 13:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:47:42.701 13:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:47:42.701 13:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:47:42.701 13:18:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:47:42.701 13:18:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:47:42.701 13:18:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:47:42.701 13:18:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:47:42.701 13:18:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:42.701 13:18:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:42.701 ************************************ 00:47:42.701 START TEST spdk_target_abort 00:47:42.701 ************************************ 00:47:42.701 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:47:42.701 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:47:42.701 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:47:42.701 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:42.701 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:42.962 spdk_targetn1 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:42.962 [2024-11-28 13:18:12.931927] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:42.962 [2024-11-28 13:18:12.968177] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:42.962 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:47:42.963 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:42.963 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:47:42.963 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:42.963 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:42.963 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:42.963 13:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:43.224 [2024-11-28 13:18:13.280377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:968 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:47:43.224 [2024-11-28 13:18:13.280409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:007b p:1 m:0 dnr:0 00:47:43.224 [2024-11-28 13:18:13.287665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1152 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:47:43.224 [2024-11-28 13:18:13.287686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0094 p:1 m:0 dnr:0 00:47:43.224 [2024-11-28 13:18:13.303700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1648 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:47:43.224 [2024-11-28 
13:18:13.303721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00d0 p:1 m:0 dnr:0 00:47:43.224 [2024-11-28 13:18:13.311682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1896 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:47:43.224 [2024-11-28 13:18:13.311702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00ef p:1 m:0 dnr:0 00:47:43.224 [2024-11-28 13:18:13.313173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1984 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:47:43.224 [2024-11-28 13:18:13.313189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00f9 p:1 m:0 dnr:0 00:47:43.224 [2024-11-28 13:18:13.326748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2384 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:47:43.224 [2024-11-28 13:18:13.326769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:47:43.484 [2024-11-28 13:18:13.364730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3616 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:47:43.484 [2024-11-28 13:18:13.364752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00c5 p:0 m:0 dnr:0 00:47:43.484 [2024-11-28 13:18:13.372740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3840 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:47:43.484 [2024-11-28 13:18:13.372758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00e1 p:0 m:0 dnr:0 00:47:46.787 Initializing NVMe Controllers 00:47:46.787 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 
00:47:46.787 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:46.787 Initialization complete. Launching workers. 00:47:46.787 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12679, failed: 8 00:47:46.787 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2786, failed to submit 9901 00:47:46.787 success 772, unsuccessful 2014, failed 0 00:47:46.787 13:18:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:46.787 13:18:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:46.787 [2024-11-28 13:18:16.669966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:672 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:47:46.787 [2024-11-28 13:18:16.670006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:47:46.787 [2024-11-28 13:18:16.707083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:1576 len:8 PRP1 0x200004e3e000 PRP2 0x0 00:47:46.787 [2024-11-28 13:18:16.707107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:00ce p:1 m:0 dnr:0 00:47:46.787 [2024-11-28 13:18:16.786100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:3416 len:8 PRP1 0x200004e40000 PRP2 0x0 00:47:46.787 [2024-11-28 13:18:16.786124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:00b5 p:0 m:0 dnr:0 00:47:46.787 [2024-11-28 13:18:16.801360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 
lba:3776 len:8 PRP1 0x200004e44000 PRP2 0x0 00:47:46.787 [2024-11-28 13:18:16.801383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:00dd p:0 m:0 dnr:0 00:47:47.047 [2024-11-28 13:18:17.082350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:10152 len:8 PRP1 0x200004e40000 PRP2 0x0 00:47:47.047 [2024-11-28 13:18:17.082379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:00fc p:1 m:0 dnr:0 00:47:50.345 Initializing NVMe Controllers 00:47:50.345 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:47:50.345 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:50.345 Initialization complete. Launching workers. 00:47:50.345 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8603, failed: 5 00:47:50.345 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1242, failed to submit 7366 00:47:50.345 success 343, unsuccessful 899, failed 0 00:47:50.345 13:18:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:50.345 13:18:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:50.345 [2024-11-28 13:18:20.407376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:170 nsid:1 lba:40600 len:8 PRP1 0x200004abe000 PRP2 0x0 00:47:50.345 [2024-11-28 13:18:20.407411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:170 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:47:52.256 [2024-11-28 13:18:22.253006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:170 nsid:1 lba:255424 len:8 PRP1 0x200004b24000 PRP2 0x0 00:47:52.256 [2024-11-28 13:18:22.253030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:170 cdw0:0 sqhd:0056 p:1 m:0 dnr:0 00:47:53.199 Initializing NVMe Controllers 00:47:53.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:47:53.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:53.199 Initialization complete. Launching workers. 00:47:53.199 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43511, failed: 2 00:47:53.199 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2681, failed to submit 40832 00:47:53.199 success 588, unsuccessful 2093, failed 0 00:47:53.199 13:18:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:47:53.199 13:18:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:53.199 13:18:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:53.199 13:18:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:53.199 13:18:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:47:53.199 13:18:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:53.199 13:18:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:55.113 13:18:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:55.113 13:18:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3811903 00:47:55.113 13:18:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 
3811903 ']' 00:47:55.113 13:18:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3811903 00:47:55.113 13:18:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:47:55.113 13:18:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:55.113 13:18:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3811903 00:47:55.113 13:18:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:55.113 13:18:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3811903' 00:47:55.113 killing process with pid 3811903 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3811903 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3811903 00:47:55.113 00:47:55.113 real 0m12.479s 00:47:55.113 user 0m50.467s 00:47:55.113 sys 0m2.019s 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:55.113 ************************************ 00:47:55.113 END TEST spdk_target_abort 00:47:55.113 ************************************ 00:47:55.113 13:18:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:47:55.113 13:18:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:47:55.113 13:18:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:55.113 13:18:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:55.113 
************************************ 00:47:55.113 START TEST kernel_target_abort 00:47:55.113 ************************************ 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- 
# kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:47:55.113 13:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:47:58.411 Waiting for block devices as requested 00:47:58.671 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:47:58.671 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:47:58.671 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:47:58.671 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:47:58.932 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:47:58.932 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:47:58.932 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:47:59.192 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:47:59.192 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:47:59.454 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:47:59.454 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:47:59.454 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:47:59.717 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:47:59.717 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:47:59.717 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:47:59.978 0000:00:01.0 (8086 0b00): vfio-pci -> 
ioatdma 00:47:59.978 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:48:00.238 No valid GPT data, bailing 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:48:00.238 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:48:00.498 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:48:00.498 00:48:00.498 Discovery Log Number of Records 2, Generation counter 2 00:48:00.498 =====Discovery Log Entry 0====== 00:48:00.498 trtype: tcp 00:48:00.498 adrfam: ipv4 00:48:00.498 subtype: current discovery subsystem 00:48:00.498 treq: not specified, sq flow control disable supported 00:48:00.498 portid: 1 00:48:00.498 trsvcid: 4420 00:48:00.498 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:48:00.498 traddr: 10.0.0.1 00:48:00.498 eflags: none 00:48:00.498 
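The configfs writes traced above (`configure_kernel_target` in nvmf/common.sh) condense to the sketch below. The values (NQN, /dev/nvme0n1, 10.0.0.1, tcp, 4420, ipv4) are copied from the log; the attribute file names (`attr_model`, `attr_allow_any_host`, `device_path`, `enable`, `addr_*`) are assumptions from the kernel nvmet configfs layout, since the trace shows only the echoed values. Root and the nvmet/nvmet_tcp modules are required, so this is a sketch rather than something to run as-is.

```shell
#!/usr/bin/env bash
# Sketch of the kernel NVMe-oF target setup traced in the log above.
# Attribute file names are assumed from the nvmet configfs layout.
set -e

nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet nvmet_tcp

# Subsystem, one namespace, one port (the three mkdirs in the trace)
mkdir -p "$subsys/namespaces/1" "$port"

# Subsystem attributes (echo SPDK-$nqn / echo 1 in the trace)
echo "SPDK-$nqn"  > "$subsys/attr_model"
echo 1            > "$subsys/attr_allow_any_host"

# Namespace backed by the block device picked earlier, then enabled
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

# TCP listener on 10.0.0.1:4420 (the four echoes into the port dir)
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

# Expose the subsystem on the port (the ln -s in the trace)
ln -s "$subsys" "$port/subsystems/"
```

After this, the `nvme discover` in the log reports two records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420.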
sectype: none 00:48:00.498 =====Discovery Log Entry 1====== 00:48:00.498 trtype: tcp 00:48:00.498 adrfam: ipv4 00:48:00.498 subtype: nvme subsystem 00:48:00.498 treq: not specified, sq flow control disable supported 00:48:00.498 portid: 1 00:48:00.498 trsvcid: 4420 00:48:00.498 subnqn: nqn.2016-06.io.spdk:testnqn 00:48:00.498 traddr: 10.0.0.1 00:48:00.498 eflags: none 00:48:00.498 sectype: none 00:48:00.498 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:48:00.498 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:48:00.499 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:48:00.499 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:48:00.499 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:48:00.499 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:48:00.499 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:48:00.499 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:48:00.499 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:48:00.499 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:48:00.499 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:48:00.499 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:48:00.499 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # 
target='trtype:tcp adrfam:IPv4' 00:48:00.499 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:48:00.499 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:48:00.499 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:48:00.499 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:48:00.499 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:48:00.499 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:48:00.499 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:48:00.499 13:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:48:03.801 Initializing NVMe Controllers 00:48:03.802 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:48:03.802 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:48:03.802 Initialization complete. Launching workers. 
00:48:03.802 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67173, failed: 0 00:48:03.802 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67173, failed to submit 0 00:48:03.802 success 0, unsuccessful 67173, failed 0 00:48:03.802 13:18:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:48:03.802 13:18:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:48:07.114 Initializing NVMe Controllers 00:48:07.114 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:48:07.114 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:48:07.114 Initialization complete. Launching workers. 00:48:07.114 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 115763, failed: 0 00:48:07.115 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29146, failed to submit 86617 00:48:07.115 success 0, unsuccessful 29146, failed 0 00:48:07.115 13:18:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:48:07.115 13:18:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:48:10.415 Initializing NVMe Controllers 00:48:10.415 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:48:10.415 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:48:10.415 Initialization complete. Launching workers. 
00:48:10.415 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146181, failed: 0 00:48:10.415 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36586, failed to submit 109595 00:48:10.415 success 0, unsuccessful 36586, failed 0 00:48:10.415 13:18:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:48:10.415 13:18:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:48:10.415 13:18:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:48:10.415 13:18:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:48:10.415 13:18:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:48:10.415 13:18:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:48:10.415 13:18:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:48:10.415 13:18:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:48:10.415 13:18:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:48:10.415 13:18:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:48:13.719 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:48:13.719 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:48:13.719 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:48:13.719 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:48:13.719 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:48:13.719 
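The `clean_kernel_target` teardown traced above undoes the setup in reverse: disable the namespace, unlink the subsystem from the port, remove the configfs directories, then unload the modules. A condensed sketch (the path the `echo 0` writes to is assumed to be the namespace `enable` attribute, since the trace shows only the value):

```shell
#!/usr/bin/env bash
# Teardown mirroring clean_kernel_target in the trace above. Requires root.
nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn

echo 0 > "$subsys/namespaces/1/enable"                       # assumed target of the echo 0
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/$nqn       # drop the port->subsystem link
rmdir "$subsys/namespaces/1"
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet
```

The order matters: the port link and namespace must go before their parent directories can be removed, and the modules only unload once configfs is empty.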
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:48:13.719 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:48:13.719 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:48:13.719 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:48:13.719 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:48:13.719 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:48:13.719 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:48:13.719 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:48:13.719 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:48:13.719 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:48:13.719 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:48:15.634 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:48:15.897 00:48:15.897 real 0m20.736s 00:48:15.897 user 0m9.982s 00:48:15.897 sys 0m6.105s 00:48:15.897 13:18:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:15.897 13:18:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:48:15.897 ************************************ 00:48:15.897 END TEST kernel_target_abort 00:48:15.897 ************************************ 00:48:15.897 13:18:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:48:15.897 13:18:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:48:15.897 13:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:48:15.897 13:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:48:15.897 13:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:48:15.897 13:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:48:15.897 13:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:48:15.897 13:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:48:15.897 rmmod nvme_tcp 00:48:15.897 rmmod nvme_fabrics 00:48:15.897 rmmod nvme_keyring 00:48:16.160 13:18:46 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:48:16.160 13:18:46 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:48:16.160 13:18:46 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:48:16.160 13:18:46 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3811903 ']' 00:48:16.160 13:18:46 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3811903 00:48:16.160 13:18:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3811903 ']' 00:48:16.160 13:18:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3811903 00:48:16.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3811903) - No such process 00:48:16.160 13:18:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3811903 is not found' 00:48:16.160 Process with pid 3811903 is not found 00:48:16.160 13:18:46 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:48:16.160 13:18:46 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:48:19.468 Waiting for block devices as requested 00:48:19.468 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:48:19.468 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:48:19.729 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:48:19.729 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:48:19.729 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:48:19.990 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:48:19.990 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:48:19.990 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:48:20.251 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:48:20.251 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:48:20.251 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:48:20.511 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:48:20.511 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:48:20.511 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:48:20.773 
0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:48:20.773 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:48:20.774 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:48:21.035 13:18:51 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:48:21.035 13:18:51 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:48:21.035 13:18:51 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:48:21.296 13:18:51 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:48:21.296 13:18:51 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:48:21.296 13:18:51 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:48:21.296 13:18:51 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:48:21.296 13:18:51 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:48:21.296 13:18:51 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:21.296 13:18:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:48:21.296 13:18:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:23.213 13:18:53 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:48:23.213 00:48:23.213 real 0m53.013s 00:48:23.213 user 1m5.854s 00:48:23.213 sys 0m19.178s 00:48:23.213 13:18:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:23.213 13:18:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:48:23.213 ************************************ 00:48:23.214 END TEST nvmf_abort_qd_sizes 00:48:23.214 ************************************ 00:48:23.214 13:18:53 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:48:23.214 13:18:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:48:23.214 13:18:53 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:48:23.214 13:18:53 -- common/autotest_common.sh@10 -- # set +x 00:48:23.214 ************************************ 00:48:23.214 START TEST keyring_file 00:48:23.214 ************************************ 00:48:23.475 13:18:53 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:48:23.475 * Looking for test storage... 00:48:23.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:48:23.475 13:18:53 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:48:23.475 13:18:53 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:48:23.475 13:18:53 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:48:23.475 13:18:53 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:48:23.475 13:18:53 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@345 -- # : 1 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:48:23.476 13:18:53 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@353 -- # local d=1 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@355 -- # echo 1 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@353 -- # local d=2 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@355 -- # echo 2 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@368 -- # return 0 00:48:23.476 13:18:53 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:23.476 13:18:53 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:48:23.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:23.476 --rc genhtml_branch_coverage=1 00:48:23.476 --rc genhtml_function_coverage=1 00:48:23.476 --rc genhtml_legend=1 00:48:23.476 --rc geninfo_all_blocks=1 00:48:23.476 --rc geninfo_unexecuted_blocks=1 00:48:23.476 00:48:23.476 ' 00:48:23.476 13:18:53 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:48:23.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:23.476 --rc genhtml_branch_coverage=1 00:48:23.476 --rc genhtml_function_coverage=1 00:48:23.476 --rc genhtml_legend=1 00:48:23.476 --rc geninfo_all_blocks=1 00:48:23.476 --rc 
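The trace above walks scripts/common.sh comparing `lcov --version` (1.15) against 2 field by field. A compact equivalent of that check, using `sort -V` (GNU/BSD version sort) instead of the script's manual component loop, could look like:

```shell
# lt A B: succeeds when version A sorts strictly before version B.
# Compact stand-in for the cmp_versions loop traced above; relies on sort -V.
lt() {
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
```

With lcov 1.15, `lt 1.15 2` succeeds, which is why the trace goes on to set the `--rc lcov_branch_coverage=1` options for the older lcov.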
geninfo_unexecuted_blocks=1 00:48:23.476 00:48:23.476 ' 00:48:23.476 13:18:53 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:48:23.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:23.476 --rc genhtml_branch_coverage=1 00:48:23.476 --rc genhtml_function_coverage=1 00:48:23.476 --rc genhtml_legend=1 00:48:23.476 --rc geninfo_all_blocks=1 00:48:23.476 --rc geninfo_unexecuted_blocks=1 00:48:23.476 00:48:23.476 ' 00:48:23.476 13:18:53 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:48:23.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:23.476 --rc genhtml_branch_coverage=1 00:48:23.476 --rc genhtml_function_coverage=1 00:48:23.476 --rc genhtml_legend=1 00:48:23.476 --rc geninfo_all_blocks=1 00:48:23.476 --rc geninfo_unexecuted_blocks=1 00:48:23.476 00:48:23.476 ' 00:48:23.476 13:18:53 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:48:23.476 13:18:53 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:23.476 13:18:53 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:23.476 13:18:53 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:23.476 13:18:53 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:23.476 13:18:53 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:23.476 13:18:53 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:23.476 13:18:53 keyring_file -- paths/export.sh@5 -- # export PATH 00:48:23.476 13:18:53 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@51 -- # : 0 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:48:23.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:48:23.476 13:18:53 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:48:23.476 13:18:53 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:48:23.476 13:18:53 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:48:23.476 13:18:53 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:48:23.476 13:18:53 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:48:23.476 13:18:53 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:48:23.476 13:18:53 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:48:23.476 13:18:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:48:23.476 13:18:53 keyring_file -- keyring/common.sh@17 -- # name=key0 00:48:23.476 13:18:53 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:48:23.476 13:18:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:48:23.476 13:18:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:48:23.476 13:18:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.6JyPBtxGAs 00:48:23.476 13:18:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:48:23.476 13:18:53 keyring_file -- nvmf/common.sh@733 -- # python - 00:48:23.739 13:18:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6JyPBtxGAs 00:48:23.739 13:18:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.6JyPBtxGAs 00:48:23.739 13:18:53 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.6JyPBtxGAs 00:48:23.739 13:18:53 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:48:23.739 13:18:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:48:23.739 13:18:53 keyring_file -- keyring/common.sh@17 -- # name=key1 00:48:23.739 13:18:53 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:48:23.739 13:18:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:48:23.739 13:18:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:48:23.739 13:18:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Wp4UzGHVGV 00:48:23.739 13:18:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:48:23.739 13:18:53 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:48:23.739 13:18:53 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:48:23.739 13:18:53 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:48:23.739 13:18:53 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:48:23.739 13:18:53 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:48:23.739 13:18:53 keyring_file -- nvmf/common.sh@733 -- # python - 00:48:23.739 13:18:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Wp4UzGHVGV 00:48:23.739 13:18:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Wp4UzGHVGV 00:48:23.739 13:18:53 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Wp4UzGHVGV 
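The `prep_key`/`format_interchange_psk` helper above pipes the hex key and digest into an inline `python -` snippet whose body the trace does not show. A hedged reconstruction, assuming the standard NVMe/TCP PSK interchange encoding (base64 of the key bytes plus a little-endian CRC32, wrapped as `NVMeTLSkey-1:<digest>:...:`); the exact encoding is an assumption, not taken from the log:

```shell
#!/usr/bin/env bash
# Assumed reconstruction of format_interchange_psk: the trace only shows the
# hex key, digest, and that an inline python snippet does the encoding.
format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib

key = bytes.fromhex(sys.argv[1])
# Assumed: append CRC32 of the key, little-endian, then base64 the whole thing.
crc = struct.pack("<I", zlib.crc32(key))
print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
}
```

For key0 (00112233445566778899aabbccddeeff, digest 0) this yields a `NVMeTLSkey-1:00:...:` string, which the script then writes to the `mktemp` path (/tmp/tmp.6JyPBtxGAs above) and chmods to 0600.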
00:48:23.739 13:18:53 keyring_file -- keyring/file.sh@30 -- # tgtpid=3822315 00:48:23.739 13:18:53 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3822315 00:48:23.739 13:18:53 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:48:23.739 13:18:53 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3822315 ']' 00:48:23.739 13:18:53 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:23.739 13:18:53 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:23.739 13:18:53 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:23.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:23.739 13:18:53 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:23.739 13:18:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:23.739 [2024-11-28 13:18:53.768509] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:48:23.739 [2024-11-28 13:18:53.768584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3822315 ] 00:48:24.000 [2024-11-28 13:18:53.905760] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:48:24.000 [2024-11-28 13:18:53.966332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:24.001 [2024-11-28 13:18:53.995213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:48:24.572 13:18:54 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:24.572 [2024-11-28 13:18:54.583275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:24.572 null0 00:48:24.572 [2024-11-28 13:18:54.615245] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:48:24.572 [2024-11-28 13:18:54.615606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:24.572 13:18:54 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t 
tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:24.572 [2024-11-28 13:18:54.647221] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:48:24.572 request: 00:48:24.572 { 00:48:24.572 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:48:24.572 "secure_channel": false, 00:48:24.572 "listen_address": { 00:48:24.572 "trtype": "tcp", 00:48:24.572 "traddr": "127.0.0.1", 00:48:24.572 "trsvcid": "4420" 00:48:24.572 }, 00:48:24.572 "method": "nvmf_subsystem_add_listener", 00:48:24.572 "req_id": 1 00:48:24.572 } 00:48:24.572 Got JSON-RPC error response 00:48:24.572 response: 00:48:24.572 { 00:48:24.572 "code": -32602, 00:48:24.572 "message": "Invalid parameters" 00:48:24.572 } 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:48:24.572 13:18:54 keyring_file -- keyring/file.sh@47 -- # bperfpid=3822443 00:48:24.572 13:18:54 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3822443 /var/tmp/bperf.sock 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3822443 ']' 00:48:24.572 13:18:54 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:24.572 13:18:54 
keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:48:24.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:24.572 13:18:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:24.833 [2024-11-28 13:18:54.710411] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:48:24.833 [2024-11-28 13:18:54.710481] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3822443 ] 00:48:24.833 [2024-11-28 13:18:54.846821] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
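Earlier, `file.sh@44` ran `nvmf_subsystem_add_listener` under a `NOT` wrapper and counted the -32602 "Invalid parameters" response as a pass. The real helper is shell code in `autotest_common.sh` (only its internals are visible in this trace); a hypothetical Python rendering of the same expect-failure pattern:

```python
import subprocess

def expect_failure(*cmd: str) -> bool:
    """Succeed only when the wrapped command fails, mirroring the NOT()
    helper this log runs nvmf_subsystem_add_listener under."""
    return subprocess.run(cmd, capture_output=True).returncode != 0

# /bin/false always fails and /bin/true always succeeds (POSIX utilities),
# so they stand in for the RPC calls the real tests wrap.
print(expect_failure("false"))  # True: the failure was expected
print(expect_failure("true"))   # False: unexpected success
```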
00:48:24.833 [2024-11-28 13:18:54.906015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:24.833 [2024-11-28 13:18:54.933665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:25.418 13:18:55 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:25.418 13:18:55 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:48:25.418 13:18:55 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6JyPBtxGAs 00:48:25.418 13:18:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6JyPBtxGAs 00:48:25.680 13:18:55 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Wp4UzGHVGV 00:48:25.680 13:18:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Wp4UzGHVGV 00:48:25.941 13:18:55 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:48:25.941 13:18:55 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:48:25.941 13:18:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:25.941 13:18:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:25.941 13:18:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:26.202 13:18:56 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.6JyPBtxGAs == \/\t\m\p\/\t\m\p\.\6\J\y\P\B\t\x\G\A\s ]] 00:48:26.202 13:18:56 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:48:26.202 13:18:56 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:48:26.202 13:18:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:26.202 13:18:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:48:26.202 
13:18:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:26.202 13:18:56 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.Wp4UzGHVGV == \/\t\m\p\/\t\m\p\.\W\p\4\U\z\G\H\V\G\V ]] 00:48:26.202 13:18:56 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:48:26.202 13:18:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:26.202 13:18:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:26.202 13:18:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:26.202 13:18:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:26.202 13:18:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:26.463 13:18:56 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:48:26.463 13:18:56 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:48:26.463 13:18:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:48:26.463 13:18:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:26.463 13:18:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:26.463 13:18:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:48:26.463 13:18:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:26.724 13:18:56 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:48:26.724 13:18:56 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:26.724 13:18:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:26.724 [2024-11-28 13:18:56.819485] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:26.984 nvme0n1 00:48:26.984 13:18:56 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:48:26.984 13:18:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:26.984 13:18:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:26.984 13:18:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:26.984 13:18:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:26.984 13:18:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:26.984 13:18:57 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:48:26.984 13:18:57 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:48:27.244 13:18:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:48:27.244 13:18:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:27.244 13:18:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:27.244 13:18:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:48:27.244 13:18:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:27.244 13:18:57 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:48:27.244 13:18:57 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:48:27.506 Running I/O for 1 seconds... 
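bdevperf reports each run twice: as a human-readable table and as a JSON blob (the results block below). A small sketch of consuming that JSON shape; the field names and values are copied from the log, the parsing code itself is hypothetical:

```python
import json

# Abridged copy of the bdevperf "results" JSON printed below in the log.
raw = """
{
  "results": [
    {
      "job": "nvme0n1",
      "core_mask": "0x2",
      "workload": "randrw",
      "runtime": 1.00377,
      "iops": 16868.406108969186,
      "mibps": 65.89221136316088,
      "io_failed": 0,
      "avg_latency_us": 7574.630446078255
    }
  ],
  "core_count": 1
}
"""

job = json.loads(raw)["results"][0]
# With 4 KiB I/Os there are 256 I/Os per MiB, so MiB/s should equal IOPS / 256.
assert abs(job["iops"] / 256 - job["mibps"]) < 1e-6
print(f'{job["job"]}: {job["iops"]:.2f} IOPS, {job["mibps"]:.2f} MiB/s over {job["runtime"]:.2f}s')
```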
00:48:28.447  16802.00 IOPS,    65.63 MiB/s
00:48:28.447                                     Latency(us)
00:48:28.447 [2024-11-28T12:18:58.574Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:48:28.447 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:48:28.447          nvme0n1             :       1.00   16868.41      65.89       0.00       0.00    7574.63    2353.87   17407.66
00:48:28.447 [2024-11-28T12:18:58.574Z] ===================================================================================================================
00:48:28.447 [2024-11-28T12:18:58.574Z] Total                       :            16868.41      65.89       0.00       0.00    7574.63    2353.87   17407.66
00:48:28.447 {
00:48:28.447   "results": [
00:48:28.447     {
00:48:28.447       "job": "nvme0n1",
00:48:28.447       "core_mask": "0x2",
00:48:28.447       "workload": "randrw",
00:48:28.447       "percentage": 50,
00:48:28.447       "status": "finished",
00:48:28.447       "queue_depth": 128,
00:48:28.447       "io_size": 4096,
00:48:28.447       "runtime": 1.00377,
00:48:28.447       "iops": 16868.406108969186,
00:48:28.447       "mibps": 65.89221136316088,
00:48:28.447       "io_failed": 0,
00:48:28.447       "io_timeout": 0,
00:48:28.447       "avg_latency_us": 7574.630446078255,
00:48:28.447       "min_latency_us": 2353.8656866020715,
00:48:28.447       "max_latency_us": 17407.657868359507
00:48:28.447     }
00:48:28.447   ],
00:48:28.447   "core_count": 1
00:48:28.447 }
00:48:28.447 13:18:58 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:48:28.447 13:18:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:48:28.708 13:18:58 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0
00:48:28.708 13:18:58 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:48:28.708 13:18:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:48:28.708 13:18:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:48:28.708 13:18:58 keyring_file -- keyring/common.sh@8 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:28.708 13:18:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:28.708 13:18:58 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:48:28.708 13:18:58 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:48:28.708 13:18:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:48:28.708 13:18:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:28.708 13:18:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:28.708 13:18:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:48:28.708 13:18:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:28.970 13:18:58 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:48:28.970 13:18:58 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:48:28.970 13:18:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:48:28.970 13:18:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:48:28.970 13:18:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:48:28.970 13:18:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:28.970 13:18:58 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:48:28.970 13:18:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:28.970 13:18:58 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:48:28.970 13:18:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:48:29.309 [2024-11-28 13:18:59.158048] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:48:29.309 [2024-11-28 13:18:59.158484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94b1a0 (107): Transport endpoint is not connected 00:48:29.309 [2024-11-28 13:18:59.159477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94b1a0 (9): Bad file descriptor 00:48:29.309 [2024-11-28 13:18:59.160477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:48:29.309 [2024-11-28 13:18:59.160485] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:48:29.309 [2024-11-28 13:18:59.160492] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:48:29.309 [2024-11-28 13:18:59.160498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
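The flush errors above carry raw errno values: 107 is ENOTCONN (the handshake with the wrong PSK never produced a connected socket at the NVMe layer) and 9 is EBADF once the descriptor is torn down. A quick check of that mapping, assuming Linux errno numbering:

```python
import errno
import os

# Linux errno numbering assumed; these match the messages in the log.
assert errno.ENOTCONN == 107  # "Transport endpoint is not connected"
assert errno.EBADF == 9       # "Bad file descriptor"

print(os.strerror(errno.ENOTCONN))
print(os.strerror(errno.EBADF))
```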
00:48:29.309 request: 00:48:29.309 { 00:48:29.309 "name": "nvme0", 00:48:29.309 "trtype": "tcp", 00:48:29.309 "traddr": "127.0.0.1", 00:48:29.309 "adrfam": "ipv4", 00:48:29.309 "trsvcid": "4420", 00:48:29.309 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:29.309 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:29.309 "prchk_reftag": false, 00:48:29.309 "prchk_guard": false, 00:48:29.309 "hdgst": false, 00:48:29.309 "ddgst": false, 00:48:29.309 "psk": "key1", 00:48:29.309 "allow_unrecognized_csi": false, 00:48:29.309 "method": "bdev_nvme_attach_controller", 00:48:29.309 "req_id": 1 00:48:29.309 } 00:48:29.309 Got JSON-RPC error response 00:48:29.309 response: 00:48:29.309 { 00:48:29.309 "code": -5, 00:48:29.309 "message": "Input/output error" 00:48:29.309 } 00:48:29.309 13:18:59 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:48:29.309 13:18:59 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:48:29.309 13:18:59 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:48:29.309 13:18:59 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:48:29.309 13:18:59 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:48:29.309 13:18:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:29.309 13:18:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:29.309 13:18:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:29.309 13:18:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:29.309 13:18:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:29.309 13:18:59 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:48:29.309 13:18:59 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:48:29.309 13:18:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:48:29.309 13:18:59 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:48:29.309 13:18:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:29.309 13:18:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:48:29.309 13:18:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:29.590 13:18:59 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:48:29.590 13:18:59 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:48:29.590 13:18:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:48:29.590 13:18:59 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:48:29.590 13:18:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:48:29.888 13:18:59 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:48:29.888 13:18:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:29.888 13:18:59 keyring_file -- keyring/file.sh@78 -- # jq length 00:48:30.152 13:19:00 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:48:30.152 13:19:00 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.6JyPBtxGAs 00:48:30.152 13:19:00 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.6JyPBtxGAs 00:48:30.152 13:19:00 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:48:30.152 13:19:00 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.6JyPBtxGAs 00:48:30.152 13:19:00 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:48:30.152 13:19:00 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:30.152 13:19:00 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:48:30.152 13:19:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:30.152 13:19:00 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6JyPBtxGAs 00:48:30.152 13:19:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6JyPBtxGAs 00:48:30.152 [2024-11-28 13:19:00.180773] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.6JyPBtxGAs': 0100660 00:48:30.152 [2024-11-28 13:19:00.180796] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:48:30.152 request: 00:48:30.152 { 00:48:30.152 "name": "key0", 00:48:30.152 "path": "/tmp/tmp.6JyPBtxGAs", 00:48:30.152 "method": "keyring_file_add_key", 00:48:30.152 "req_id": 1 00:48:30.152 } 00:48:30.152 Got JSON-RPC error response 00:48:30.152 response: 00:48:30.152 { 00:48:30.152 "code": -1, 00:48:30.152 "message": "Operation not permitted" 00:48:30.152 } 00:48:30.152 13:19:00 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:48:30.152 13:19:00 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:48:30.152 13:19:00 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:48:30.152 13:19:00 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:48:30.152 13:19:00 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.6JyPBtxGAs 00:48:30.152 13:19:00 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6JyPBtxGAs 00:48:30.152 13:19:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6JyPBtxGAs 00:48:30.412 13:19:00 
keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.6JyPBtxGAs 00:48:30.412 13:19:00 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:48:30.412 13:19:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:30.412 13:19:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:30.412 13:19:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:30.413 13:19:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:30.413 13:19:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:30.673 13:19:00 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:48:30.673 13:19:00 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:30.673 13:19:00 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:48:30.673 13:19:00 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:30.673 13:19:00 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:48:30.673 13:19:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:30.673 13:19:00 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:48:30.673 13:19:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:30.673 13:19:00 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:30.673 13:19:00 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:30.673 [2024-11-28 13:19:00.704879] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.6JyPBtxGAs': No such file or directory 00:48:30.673 [2024-11-28 13:19:00.704892] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:48:30.673 [2024-11-28 13:19:00.704905] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:48:30.673 [2024-11-28 13:19:00.704911] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:48:30.673 [2024-11-28 13:19:00.704916] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:48:30.673 [2024-11-28 13:19:00.704922] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:48:30.673 request: 00:48:30.673 { 00:48:30.673 "name": "nvme0", 00:48:30.673 "trtype": "tcp", 00:48:30.673 "traddr": "127.0.0.1", 00:48:30.673 "adrfam": "ipv4", 00:48:30.673 "trsvcid": "4420", 00:48:30.673 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:30.673 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:30.673 "prchk_reftag": false, 00:48:30.673 "prchk_guard": false, 00:48:30.673 "hdgst": false, 00:48:30.673 "ddgst": false, 00:48:30.673 "psk": "key0", 00:48:30.673 "allow_unrecognized_csi": false, 00:48:30.673 "method": "bdev_nvme_attach_controller", 00:48:30.673 "req_id": 1 00:48:30.673 } 00:48:30.673 Got JSON-RPC error response 00:48:30.673 response: 00:48:30.673 { 00:48:30.673 "code": -19, 00:48:30.673 "message": "No such device" 00:48:30.673 } 00:48:30.673 13:19:00 keyring_file -- common/autotest_common.sh@655 
-- # es=1 00:48:30.673 13:19:00 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:48:30.673 13:19:00 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:48:30.673 13:19:00 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:48:30.673 13:19:00 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:48:30.673 13:19:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:48:30.935 13:19:00 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:48:30.935 13:19:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:48:30.935 13:19:00 keyring_file -- keyring/common.sh@17 -- # name=key0 00:48:30.935 13:19:00 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:48:30.935 13:19:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:48:30.935 13:19:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:48:30.935 13:19:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YHWc2xMaFt 00:48:30.935 13:19:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:48:30.935 13:19:00 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:48:30.935 13:19:00 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:48:30.935 13:19:00 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:48:30.935 13:19:00 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:48:30.935 13:19:00 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:48:30.935 13:19:00 keyring_file -- nvmf/common.sh@733 -- # python - 00:48:30.935 13:19:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YHWc2xMaFt 00:48:30.935 13:19:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YHWc2xMaFt 
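The two negative tests above show what the file keyring validates before accepting a key: a file chmod'ed to 0660 is rejected with "Invalid permissions ... 0100660", and a removed file fails the stat. A hypothetical sketch of that validation (the real logic is `keyring_file_check_path` in SPDK's `keyring.c`; this reimplementation only guesses at it):

```python
import os
import stat
import tempfile

def key_file_ok(path: str) -> bool:
    """Reject missing key files and any file readable or writable by
    group/other, mimicking the two keyring errors seen in the log."""
    try:
        mode = stat.S_IMODE(os.stat(path).st_mode)
    except FileNotFoundError:
        return False  # "Could not stat key file ...: No such file or directory"
    return (mode & 0o077) == 0  # only owner permission bits may be set

fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o600)
print(key_file_ok(path))  # True
os.chmod(path, 0o660)
print(key_file_ok(path))  # False: group bits set
os.remove(path)
print(key_file_ok(path))  # False: file is gone
```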
00:48:30.935 13:19:00 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.YHWc2xMaFt 00:48:30.935 13:19:00 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YHWc2xMaFt 00:48:30.935 13:19:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YHWc2xMaFt 00:48:31.196 13:19:01 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:31.196 13:19:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:31.457 nvme0n1 00:48:31.457 13:19:01 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:48:31.457 13:19:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:31.457 13:19:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:31.457 13:19:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:31.457 13:19:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:31.457 13:19:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:31.457 13:19:01 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:48:31.457 13:19:01 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:48:31.457 13:19:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:48:31.717 13:19:01 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:48:31.717 13:19:01 keyring_file -- 
keyring/file.sh@102 -- # jq -r .removed 00:48:31.717 13:19:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:31.717 13:19:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:31.717 13:19:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:31.979 13:19:01 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:48:31.979 13:19:01 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:48:31.979 13:19:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:31.979 13:19:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:31.979 13:19:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:31.979 13:19:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:31.979 13:19:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:31.979 13:19:02 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:48:31.979 13:19:02 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:48:31.979 13:19:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:48:32.239 13:19:02 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:48:32.239 13:19:02 keyring_file -- keyring/file.sh@105 -- # jq length 00:48:32.239 13:19:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:32.499 13:19:02 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:48:32.499 13:19:02 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YHWc2xMaFt 00:48:32.499 13:19:02 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YHWc2xMaFt 00:48:32.499 13:19:02 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Wp4UzGHVGV 00:48:32.499 13:19:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Wp4UzGHVGV 00:48:32.760 13:19:02 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:32.760 13:19:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:33.024 nvme0n1 00:48:33.024 13:19:02 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:48:33.024 13:19:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:48:33.285 13:19:03 keyring_file -- keyring/file.sh@113 -- # config='{ 00:48:33.285 "subsystems": [ 00:48:33.285 { 00:48:33.285 "subsystem": "keyring", 00:48:33.285 "config": [ 00:48:33.285 { 00:48:33.285 "method": "keyring_file_add_key", 00:48:33.285 "params": { 00:48:33.285 "name": "key0", 00:48:33.285 "path": "/tmp/tmp.YHWc2xMaFt" 00:48:33.285 } 00:48:33.285 }, 00:48:33.285 { 00:48:33.285 "method": "keyring_file_add_key", 00:48:33.285 "params": { 00:48:33.285 "name": "key1", 00:48:33.285 "path": "/tmp/tmp.Wp4UzGHVGV" 00:48:33.285 } 00:48:33.285 } 00:48:33.285 ] 00:48:33.285 }, 00:48:33.285 { 00:48:33.285 "subsystem": "iobuf", 00:48:33.285 "config": [ 00:48:33.285 { 00:48:33.285 "method": "iobuf_set_options", 
00:48:33.285 "params": { 00:48:33.285 "small_pool_count": 8192, 00:48:33.285 "large_pool_count": 1024, 00:48:33.285 "small_bufsize": 8192, 00:48:33.285 "large_bufsize": 135168, 00:48:33.285 "enable_numa": false 00:48:33.285 } 00:48:33.285 } 00:48:33.285 ] 00:48:33.285 }, 00:48:33.285 { 00:48:33.285 "subsystem": "sock", 00:48:33.285 "config": [ 00:48:33.285 { 00:48:33.285 "method": "sock_set_default_impl", 00:48:33.285 "params": { 00:48:33.285 "impl_name": "posix" 00:48:33.285 } 00:48:33.285 }, 00:48:33.285 { 00:48:33.285 "method": "sock_impl_set_options", 00:48:33.285 "params": { 00:48:33.285 "impl_name": "ssl", 00:48:33.285 "recv_buf_size": 4096, 00:48:33.285 "send_buf_size": 4096, 00:48:33.285 "enable_recv_pipe": true, 00:48:33.285 "enable_quickack": false, 00:48:33.285 "enable_placement_id": 0, 00:48:33.285 "enable_zerocopy_send_server": true, 00:48:33.285 "enable_zerocopy_send_client": false, 00:48:33.285 "zerocopy_threshold": 0, 00:48:33.285 "tls_version": 0, 00:48:33.285 "enable_ktls": false 00:48:33.285 } 00:48:33.285 }, 00:48:33.285 { 00:48:33.285 "method": "sock_impl_set_options", 00:48:33.285 "params": { 00:48:33.285 "impl_name": "posix", 00:48:33.285 "recv_buf_size": 2097152, 00:48:33.285 "send_buf_size": 2097152, 00:48:33.285 "enable_recv_pipe": true, 00:48:33.285 "enable_quickack": false, 00:48:33.285 "enable_placement_id": 0, 00:48:33.286 "enable_zerocopy_send_server": true, 00:48:33.286 "enable_zerocopy_send_client": false, 00:48:33.286 "zerocopy_threshold": 0, 00:48:33.286 "tls_version": 0, 00:48:33.286 "enable_ktls": false 00:48:33.286 } 00:48:33.286 } 00:48:33.286 ] 00:48:33.286 }, 00:48:33.286 { 00:48:33.286 "subsystem": "vmd", 00:48:33.286 "config": [] 00:48:33.286 }, 00:48:33.286 { 00:48:33.286 "subsystem": "accel", 00:48:33.286 "config": [ 00:48:33.286 { 00:48:33.286 "method": "accel_set_options", 00:48:33.286 "params": { 00:48:33.286 "small_cache_size": 128, 00:48:33.286 "large_cache_size": 16, 00:48:33.286 "task_count": 2048, 00:48:33.286 
"sequence_count": 2048, 00:48:33.286 "buf_count": 2048 00:48:33.286 } 00:48:33.286 } 00:48:33.286 ] 00:48:33.286 }, 00:48:33.286 { 00:48:33.286 "subsystem": "bdev", 00:48:33.286 "config": [ 00:48:33.286 { 00:48:33.286 "method": "bdev_set_options", 00:48:33.286 "params": { 00:48:33.286 "bdev_io_pool_size": 65535, 00:48:33.286 "bdev_io_cache_size": 256, 00:48:33.286 "bdev_auto_examine": true, 00:48:33.286 "iobuf_small_cache_size": 128, 00:48:33.286 "iobuf_large_cache_size": 16 00:48:33.286 } 00:48:33.286 }, 00:48:33.286 { 00:48:33.286 "method": "bdev_raid_set_options", 00:48:33.286 "params": { 00:48:33.286 "process_window_size_kb": 1024, 00:48:33.286 "process_max_bandwidth_mb_sec": 0 00:48:33.286 } 00:48:33.286 }, 00:48:33.286 { 00:48:33.286 "method": "bdev_iscsi_set_options", 00:48:33.286 "params": { 00:48:33.286 "timeout_sec": 30 00:48:33.286 } 00:48:33.286 }, 00:48:33.286 { 00:48:33.286 "method": "bdev_nvme_set_options", 00:48:33.286 "params": { 00:48:33.286 "action_on_timeout": "none", 00:48:33.286 "timeout_us": 0, 00:48:33.286 "timeout_admin_us": 0, 00:48:33.286 "keep_alive_timeout_ms": 10000, 00:48:33.286 "arbitration_burst": 0, 00:48:33.286 "low_priority_weight": 0, 00:48:33.286 "medium_priority_weight": 0, 00:48:33.286 "high_priority_weight": 0, 00:48:33.286 "nvme_adminq_poll_period_us": 10000, 00:48:33.286 "nvme_ioq_poll_period_us": 0, 00:48:33.286 "io_queue_requests": 512, 00:48:33.286 "delay_cmd_submit": true, 00:48:33.286 "transport_retry_count": 4, 00:48:33.286 "bdev_retry_count": 3, 00:48:33.286 "transport_ack_timeout": 0, 00:48:33.286 "ctrlr_loss_timeout_sec": 0, 00:48:33.286 "reconnect_delay_sec": 0, 00:48:33.286 "fast_io_fail_timeout_sec": 0, 00:48:33.286 "disable_auto_failback": false, 00:48:33.286 "generate_uuids": false, 00:48:33.286 "transport_tos": 0, 00:48:33.286 "nvme_error_stat": false, 00:48:33.286 "rdma_srq_size": 0, 00:48:33.286 "io_path_stat": false, 00:48:33.286 "allow_accel_sequence": false, 00:48:33.286 "rdma_max_cq_size": 0, 
00:48:33.286 "rdma_cm_event_timeout_ms": 0, 00:48:33.286 "dhchap_digests": [ 00:48:33.286 "sha256", 00:48:33.286 "sha384", 00:48:33.286 "sha512" 00:48:33.286 ], 00:48:33.286 "dhchap_dhgroups": [ 00:48:33.286 "null", 00:48:33.286 "ffdhe2048", 00:48:33.286 "ffdhe3072", 00:48:33.286 "ffdhe4096", 00:48:33.286 "ffdhe6144", 00:48:33.286 "ffdhe8192" 00:48:33.286 ] 00:48:33.286 } 00:48:33.286 }, 00:48:33.286 { 00:48:33.286 "method": "bdev_nvme_attach_controller", 00:48:33.286 "params": { 00:48:33.286 "name": "nvme0", 00:48:33.286 "trtype": "TCP", 00:48:33.286 "adrfam": "IPv4", 00:48:33.286 "traddr": "127.0.0.1", 00:48:33.286 "trsvcid": "4420", 00:48:33.286 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:33.286 "prchk_reftag": false, 00:48:33.286 "prchk_guard": false, 00:48:33.286 "ctrlr_loss_timeout_sec": 0, 00:48:33.286 "reconnect_delay_sec": 0, 00:48:33.286 "fast_io_fail_timeout_sec": 0, 00:48:33.286 "psk": "key0", 00:48:33.286 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:33.286 "hdgst": false, 00:48:33.286 "ddgst": false, 00:48:33.286 "multipath": "multipath" 00:48:33.286 } 00:48:33.286 }, 00:48:33.286 { 00:48:33.286 "method": "bdev_nvme_set_hotplug", 00:48:33.286 "params": { 00:48:33.286 "period_us": 100000, 00:48:33.286 "enable": false 00:48:33.286 } 00:48:33.286 }, 00:48:33.286 { 00:48:33.286 "method": "bdev_wait_for_examine" 00:48:33.286 } 00:48:33.286 ] 00:48:33.286 }, 00:48:33.286 { 00:48:33.286 "subsystem": "nbd", 00:48:33.286 "config": [] 00:48:33.286 } 00:48:33.286 ] 00:48:33.286 }' 00:48:33.286 13:19:03 keyring_file -- keyring/file.sh@115 -- # killprocess 3822443 00:48:33.286 13:19:03 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3822443 ']' 00:48:33.286 13:19:03 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3822443 00:48:33.286 13:19:03 keyring_file -- common/autotest_common.sh@959 -- # uname 00:48:33.286 13:19:03 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:33.286 13:19:03 keyring_file -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3822443 00:48:33.286 13:19:03 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:48:33.286 13:19:03 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:48:33.286 13:19:03 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3822443' 00:48:33.286 killing process with pid 3822443 00:48:33.286 13:19:03 keyring_file -- common/autotest_common.sh@973 -- # kill 3822443 00:48:33.286 Received shutdown signal, test time was about 1.000000 seconds 00:48:33.286 00:48:33.286 Latency(us) 00:48:33.286 [2024-11-28T12:19:03.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:33.286 [2024-11-28T12:19:03.413Z] =================================================================================================================== 00:48:33.286 [2024-11-28T12:19:03.413Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:48:33.286 13:19:03 keyring_file -- common/autotest_common.sh@978 -- # wait 3822443 00:48:33.286 13:19:03 keyring_file -- keyring/file.sh@118 -- # bperfpid=3824256 00:48:33.286 13:19:03 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3824256 /var/tmp/bperf.sock 00:48:33.286 13:19:03 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3824256 ']' 00:48:33.286 13:19:03 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:48:33.286 13:19:03 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:33.287 13:19:03 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:48:33.287 13:19:03 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:48:33.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:48:33.287 13:19:03 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:33.287 13:19:03 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:48:33.287 "subsystems": [ 00:48:33.287 { 00:48:33.287 "subsystem": "keyring", 00:48:33.287 "config": [ 00:48:33.287 { 00:48:33.287 "method": "keyring_file_add_key", 00:48:33.287 "params": { 00:48:33.287 "name": "key0", 00:48:33.287 "path": "/tmp/tmp.YHWc2xMaFt" 00:48:33.287 } 00:48:33.287 }, 00:48:33.287 { 00:48:33.287 "method": "keyring_file_add_key", 00:48:33.287 "params": { 00:48:33.287 "name": "key1", 00:48:33.287 "path": "/tmp/tmp.Wp4UzGHVGV" 00:48:33.287 } 00:48:33.287 } 00:48:33.287 ] 00:48:33.287 }, 00:48:33.287 { 00:48:33.287 "subsystem": "iobuf", 00:48:33.287 "config": [ 00:48:33.287 { 00:48:33.287 "method": "iobuf_set_options", 00:48:33.287 "params": { 00:48:33.287 "small_pool_count": 8192, 00:48:33.287 "large_pool_count": 1024, 00:48:33.287 "small_bufsize": 8192, 00:48:33.287 "large_bufsize": 135168, 00:48:33.287 "enable_numa": false 00:48:33.287 } 00:48:33.287 } 00:48:33.287 ] 00:48:33.287 }, 00:48:33.287 { 00:48:33.287 "subsystem": "sock", 00:48:33.287 "config": [ 00:48:33.287 { 00:48:33.287 "method": "sock_set_default_impl", 00:48:33.287 "params": { 00:48:33.287 "impl_name": "posix" 00:48:33.287 } 00:48:33.287 }, 00:48:33.287 { 00:48:33.287 "method": "sock_impl_set_options", 00:48:33.287 "params": { 00:48:33.287 "impl_name": "ssl", 00:48:33.287 "recv_buf_size": 4096, 00:48:33.287 "send_buf_size": 4096, 00:48:33.287 "enable_recv_pipe": true, 00:48:33.287 "enable_quickack": false, 00:48:33.287 "enable_placement_id": 0, 00:48:33.287 "enable_zerocopy_send_server": true, 00:48:33.287 "enable_zerocopy_send_client": false, 00:48:33.287 "zerocopy_threshold": 0, 00:48:33.287 "tls_version": 0, 00:48:33.287 "enable_ktls": false 00:48:33.287 } 00:48:33.287 }, 00:48:33.287 { 00:48:33.287 "method": 
"sock_impl_set_options", 00:48:33.287 "params": { 00:48:33.287 "impl_name": "posix", 00:48:33.287 "recv_buf_size": 2097152, 00:48:33.287 "send_buf_size": 2097152, 00:48:33.287 "enable_recv_pipe": true, 00:48:33.287 "enable_quickack": false, 00:48:33.287 "enable_placement_id": 0, 00:48:33.287 "enable_zerocopy_send_server": true, 00:48:33.287 "enable_zerocopy_send_client": false, 00:48:33.287 "zerocopy_threshold": 0, 00:48:33.287 "tls_version": 0, 00:48:33.287 "enable_ktls": false 00:48:33.287 } 00:48:33.287 } 00:48:33.287 ] 00:48:33.287 }, 00:48:33.287 { 00:48:33.287 "subsystem": "vmd", 00:48:33.287 "config": [] 00:48:33.287 }, 00:48:33.287 { 00:48:33.287 "subsystem": "accel", 00:48:33.287 "config": [ 00:48:33.287 { 00:48:33.287 "method": "accel_set_options", 00:48:33.287 "params": { 00:48:33.287 "small_cache_size": 128, 00:48:33.287 "large_cache_size": 16, 00:48:33.287 "task_count": 2048, 00:48:33.287 "sequence_count": 2048, 00:48:33.287 "buf_count": 2048 00:48:33.287 } 00:48:33.287 } 00:48:33.287 ] 00:48:33.287 }, 00:48:33.287 { 00:48:33.287 "subsystem": "bdev", 00:48:33.287 "config": [ 00:48:33.287 { 00:48:33.287 "method": "bdev_set_options", 00:48:33.287 "params": { 00:48:33.287 "bdev_io_pool_size": 65535, 00:48:33.287 "bdev_io_cache_size": 256, 00:48:33.287 "bdev_auto_examine": true, 00:48:33.287 "iobuf_small_cache_size": 128, 00:48:33.287 "iobuf_large_cache_size": 16 00:48:33.287 } 00:48:33.287 }, 00:48:33.287 { 00:48:33.287 "method": "bdev_raid_set_options", 00:48:33.287 "params": { 00:48:33.287 "process_window_size_kb": 1024, 00:48:33.287 "process_max_bandwidth_mb_sec": 0 00:48:33.287 } 00:48:33.287 }, 00:48:33.287 { 00:48:33.287 "method": "bdev_iscsi_set_options", 00:48:33.287 "params": { 00:48:33.287 "timeout_sec": 30 00:48:33.287 } 00:48:33.287 }, 00:48:33.287 { 00:48:33.287 "method": "bdev_nvme_set_options", 00:48:33.287 "params": { 00:48:33.287 "action_on_timeout": "none", 00:48:33.287 "timeout_us": 0, 00:48:33.287 "timeout_admin_us": 0, 00:48:33.287 
"keep_alive_timeout_ms": 10000, 00:48:33.287 "arbitration_burst": 0, 00:48:33.287 "low_priority_weight": 0, 00:48:33.287 "medium_priority_weight": 0, 00:48:33.287 "high_priority_weight": 0, 00:48:33.287 "nvme_adminq_poll_period_us": 10000, 00:48:33.287 "nvme_ioq_poll_period_us": 0, 00:48:33.287 "io_queue_requests": 512, 00:48:33.287 "delay_cmd_submit": true, 00:48:33.287 "transport_retry_count": 4, 00:48:33.287 "bdev_retry_count": 3, 00:48:33.287 "transport_ack_timeout": 0, 00:48:33.287 "ctrlr_loss_timeout_sec": 0, 00:48:33.287 "reconnect_delay_sec": 0, 00:48:33.287 "fast_io_fail_timeout_sec": 0, 00:48:33.287 "disable_auto_failback": false, 00:48:33.287 "generate_uuids": false, 00:48:33.287 "transport_tos": 0, 00:48:33.287 "nvme_error_stat": false, 00:48:33.287 "rdma_srq_size": 0, 00:48:33.287 "io_path_stat": false, 00:48:33.287 "allow_accel_sequence": false, 00:48:33.287 "rdma_max_cq_size": 0, 00:48:33.287 "rdma_cm_event_timeout_ms": 0, 00:48:33.287 "dhchap_digests": [ 00:48:33.287 "sha256", 00:48:33.287 "sha384", 00:48:33.287 "sha512" 00:48:33.287 ], 00:48:33.287 "dhchap_dhgroups": [ 00:48:33.287 "null", 00:48:33.287 "ffdhe2048", 00:48:33.287 "ffdhe3072", 00:48:33.287 "ffdhe4096", 00:48:33.288 "ffdhe6144", 00:48:33.288 "ffdhe8192" 00:48:33.288 ] 00:48:33.288 } 00:48:33.288 }, 00:48:33.288 { 00:48:33.288 "method": "bdev_nvme_attach_controller", 00:48:33.288 "params": { 00:48:33.288 "name": "nvme0", 00:48:33.288 "trtype": "TCP", 00:48:33.288 "adrfam": "IPv4", 00:48:33.288 "traddr": "127.0.0.1", 00:48:33.288 "trsvcid": "4420", 00:48:33.288 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:33.288 "prchk_reftag": false, 00:48:33.288 "prchk_guard": false, 00:48:33.288 "ctrlr_loss_timeout_sec": 0, 00:48:33.288 "reconnect_delay_sec": 0, 00:48:33.288 "fast_io_fail_timeout_sec": 0, 00:48:33.288 "psk": "key0", 00:48:33.288 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:33.288 "hdgst": false, 00:48:33.288 "ddgst": false, 00:48:33.288 "multipath": "multipath" 00:48:33.288 } 
00:48:33.288 }, 00:48:33.288 { 00:48:33.288 "method": "bdev_nvme_set_hotplug", 00:48:33.288 "params": { 00:48:33.288 "period_us": 100000, 00:48:33.288 "enable": false 00:48:33.288 } 00:48:33.288 }, 00:48:33.288 { 00:48:33.288 "method": "bdev_wait_for_examine" 00:48:33.288 } 00:48:33.288 ] 00:48:33.288 }, 00:48:33.288 { 00:48:33.288 "subsystem": "nbd", 00:48:33.288 "config": [] 00:48:33.288 } 00:48:33.288 ] 00:48:33.288 }' 00:48:33.288 13:19:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:33.548 [2024-11-28 13:19:03.432969] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:48:33.548 [2024-11-28 13:19:03.433036] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3824256 ] 00:48:33.548 [2024-11-28 13:19:03.564993] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:48:33.548 [2024-11-28 13:19:03.620462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:33.548 [2024-11-28 13:19:03.636611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:33.808 [2024-11-28 13:19:03.774610] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:34.379 13:19:04 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:34.379 13:19:04 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:48:34.379 13:19:04 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:48:34.379 13:19:04 keyring_file -- keyring/file.sh@121 -- # jq length 00:48:34.379 13:19:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:34.379 13:19:04 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:48:34.379 13:19:04 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:48:34.379 13:19:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:34.379 13:19:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:34.379 13:19:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:34.379 13:19:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:34.380 13:19:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:34.640 13:19:04 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:48:34.640 13:19:04 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:48:34.640 13:19:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:48:34.640 13:19:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:34.640 13:19:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:34.640 13:19:04 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:34.640 13:19:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:48:34.900 13:19:04 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:48:34.900 13:19:04 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:48:34.900 13:19:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:48:34.900 13:19:04 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:48:34.900 13:19:04 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:48:34.901 13:19:04 keyring_file -- keyring/file.sh@1 -- # cleanup 00:48:34.901 13:19:04 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.YHWc2xMaFt /tmp/tmp.Wp4UzGHVGV 00:48:34.901 13:19:04 keyring_file -- keyring/file.sh@20 -- # killprocess 3824256 00:48:34.901 13:19:04 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3824256 ']' 00:48:34.901 13:19:04 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3824256 00:48:34.901 13:19:04 keyring_file -- common/autotest_common.sh@959 -- # uname 00:48:34.901 13:19:04 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:34.901 13:19:04 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3824256 00:48:34.901 13:19:05 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:48:34.901 13:19:05 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:48:34.901 13:19:05 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3824256' 00:48:34.901 killing process with pid 3824256 00:48:34.901 13:19:05 keyring_file -- common/autotest_common.sh@973 -- # kill 3824256 00:48:34.901 Received shutdown signal, test time was about 1.000000 seconds 00:48:34.901 00:48:34.901 Latency(us) 
00:48:34.901 [2024-11-28T12:19:05.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:34.901 [2024-11-28T12:19:05.028Z] =================================================================================================================== 00:48:34.901 [2024-11-28T12:19:05.028Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:48:34.901 13:19:05 keyring_file -- common/autotest_common.sh@978 -- # wait 3824256 00:48:35.161 13:19:05 keyring_file -- keyring/file.sh@21 -- # killprocess 3822315 00:48:35.161 13:19:05 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3822315 ']' 00:48:35.161 13:19:05 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3822315 00:48:35.161 13:19:05 keyring_file -- common/autotest_common.sh@959 -- # uname 00:48:35.161 13:19:05 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:35.161 13:19:05 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3822315 00:48:35.161 13:19:05 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:48:35.161 13:19:05 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:48:35.161 13:19:05 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3822315' 00:48:35.161 killing process with pid 3822315 00:48:35.161 13:19:05 keyring_file -- common/autotest_common.sh@973 -- # kill 3822315 00:48:35.161 13:19:05 keyring_file -- common/autotest_common.sh@978 -- # wait 3822315 00:48:35.421 00:48:35.421 real 0m12.018s 00:48:35.421 user 0m28.721s 00:48:35.421 sys 0m2.737s 00:48:35.421 13:19:05 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:35.421 13:19:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:35.421 ************************************ 00:48:35.421 END TEST keyring_file 00:48:35.421 ************************************ 00:48:35.421 13:19:05 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:48:35.421 
13:19:05 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:48:35.421 13:19:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:48:35.421 13:19:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:48:35.421 13:19:05 -- common/autotest_common.sh@10 -- # set +x 00:48:35.421 ************************************ 00:48:35.421 START TEST keyring_linux 00:48:35.421 ************************************ 00:48:35.421 13:19:05 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:48:35.421 Joined session keyring: 299942581 00:48:35.421 * Looking for test storage... 00:48:35.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:48:35.421 13:19:05 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:48:35.421 13:19:05 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:48:35.681 13:19:05 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:48:35.681 13:19:05 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 
00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@345 -- # : 1 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:48:35.681 13:19:05 keyring_linux -- scripts/common.sh@368 -- # return 0 00:48:35.681 13:19:05 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:35.681 13:19:05 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:48:35.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:35.681 --rc genhtml_branch_coverage=1 00:48:35.681 --rc 
genhtml_function_coverage=1 00:48:35.681 --rc genhtml_legend=1 00:48:35.681 --rc geninfo_all_blocks=1 00:48:35.681 --rc geninfo_unexecuted_blocks=1 00:48:35.681 00:48:35.681 ' 00:48:35.681 13:19:05 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:48:35.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:35.681 --rc genhtml_branch_coverage=1 00:48:35.681 --rc genhtml_function_coverage=1 00:48:35.682 --rc genhtml_legend=1 00:48:35.682 --rc geninfo_all_blocks=1 00:48:35.682 --rc geninfo_unexecuted_blocks=1 00:48:35.682 00:48:35.682 ' 00:48:35.682 13:19:05 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:48:35.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:35.682 --rc genhtml_branch_coverage=1 00:48:35.682 --rc genhtml_function_coverage=1 00:48:35.682 --rc genhtml_legend=1 00:48:35.682 --rc geninfo_all_blocks=1 00:48:35.682 --rc geninfo_unexecuted_blocks=1 00:48:35.682 00:48:35.682 ' 00:48:35.682 13:19:05 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:48:35.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:35.682 --rc genhtml_branch_coverage=1 00:48:35.682 --rc genhtml_function_coverage=1 00:48:35.682 --rc genhtml_legend=1 00:48:35.682 --rc geninfo_all_blocks=1 00:48:35.682 --rc geninfo_unexecuted_blocks=1 00:48:35.682 00:48:35.682 ' 00:48:35.682 13:19:05 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:48:35.682 13:19:05 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:35.682 13:19:05 
keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:48:35.682 13:19:05 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:48:35.682 13:19:05 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:35.682 13:19:05 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:35.682 13:19:05 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:35.682 13:19:05 keyring_linux -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:35.682 13:19:05 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:35.682 13:19:05 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:35.682 13:19:05 keyring_linux -- paths/export.sh@5 -- # export PATH 00:48:35.682 13:19:05 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 
00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:48:35.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:48:35.682 13:19:05 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:48:35.682 13:19:05 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:48:35.682 13:19:05 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:48:35.682 13:19:05 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:48:35.682 13:19:05 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:48:35.682 13:19:05 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:48:35.682 13:19:05 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:48:35.682 13:19:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:48:35.682 13:19:05 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:48:35.682 13:19:05 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:48:35.682 13:19:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:48:35.682 13:19:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:48:35.682 13:19:05 keyring_linux -- keyring/common.sh@20 -- 
# format_interchange_psk 00112233445566778899aabbccddeeff 0 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@733 -- # python - 00:48:35.682 13:19:05 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:48:35.682 13:19:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:48:35.682 /tmp/:spdk-test:key0 00:48:35.682 13:19:05 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:48:35.682 13:19:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:48:35.682 13:19:05 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:48:35.682 13:19:05 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:48:35.682 13:19:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:48:35.682 13:19:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:48:35.682 13:19:05 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:48:35.682 13:19:05 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:48:35.682 13:19:05 
keyring_linux -- nvmf/common.sh@733 -- # python - 00:48:35.682 13:19:05 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:48:35.682 13:19:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:48:35.682 /tmp/:spdk-test:key1 00:48:35.682 13:19:05 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:48:35.682 13:19:05 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3824697 00:48:35.682 13:19:05 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3824697 00:48:35.682 13:19:05 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3824697 ']' 00:48:35.682 13:19:05 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:35.682 13:19:05 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:35.682 13:19:05 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:35.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:35.682 13:19:05 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:35.682 13:19:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:48:35.943 [2024-11-28 13:19:05.807499] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:48:35.943 [2024-11-28 13:19:05.807556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3824697 ] 00:48:35.943 [2024-11-28 13:19:05.940410] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:48:35.943 [2024-11-28 13:19:05.995988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:35.943 [2024-11-28 13:19:06.012505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:36.513 13:19:06 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:36.513 13:19:06 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:48:36.513 13:19:06 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:48:36.513 13:19:06 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:36.513 13:19:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:48:36.513 [2024-11-28 13:19:06.601606] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:36.513 null0 00:48:36.513 [2024-11-28 13:19:06.633590] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:48:36.513 [2024-11-28 13:19:06.633952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:48:36.774 13:19:06 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:36.774 13:19:06 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:48:36.774 405554323 00:48:36.774 13:19:06 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:48:36.774 153277030 00:48:36.774 13:19:06 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3824867 00:48:36.774 13:19:06 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:48:36.774 13:19:06 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3824867 /var/tmp/bperf.sock 00:48:36.774 13:19:06 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3824867 ']' 00:48:36.774 13:19:06 keyring_linux -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:48:36.774 13:19:06 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:36.774 13:19:06 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:48:36.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:48:36.774 13:19:06 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:36.774 13:19:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:48:36.774 [2024-11-28 13:19:06.712383] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:48:36.774 [2024-11-28 13:19:06.712434] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3824867 ] 00:48:36.774 [2024-11-28 13:19:06.844880] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:48:36.774 [2024-11-28 13:19:06.897123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:37.034 [2024-11-28 13:19:06.913527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:37.604 13:19:07 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:37.604 13:19:07 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:48:37.604 13:19:07 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:48:37.604 13:19:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:48:37.604 13:19:07 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:48:37.604 13:19:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:48:37.865 13:19:07 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:48:37.865 13:19:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:48:37.865 [2024-11-28 13:19:07.990218] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:38.125 nvme0n1 00:48:38.125 13:19:08 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:48:38.125 13:19:08 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:48:38.125 13:19:08 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:48:38.125 13:19:08 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 
00:48:38.125 13:19:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:38.125 13:19:08 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:48:38.386 13:19:08 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:48:38.386 13:19:08 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:48:38.386 13:19:08 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:48:38.386 13:19:08 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:48:38.386 13:19:08 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:38.386 13:19:08 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:48:38.386 13:19:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:38.386 13:19:08 keyring_linux -- keyring/linux.sh@25 -- # sn=405554323 00:48:38.386 13:19:08 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:48:38.386 13:19:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:48:38.386 13:19:08 keyring_linux -- keyring/linux.sh@26 -- # [[ 405554323 == \4\0\5\5\5\4\3\2\3 ]] 00:48:38.386 13:19:08 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 405554323 00:48:38.386 13:19:08 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:48:38.386 13:19:08 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:48:38.646 Running I/O for 1 seconds... 
00:48:39.589 24162.00 IOPS, 94.38 MiB/s 00:48:39.589 Latency(us) 00:48:39.589 [2024-11-28T12:19:09.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:39.589 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:48:39.589 nvme0n1 : 1.01 24162.41 94.38 0.00 0.00 5281.46 3749.76 8156.42 00:48:39.589 [2024-11-28T12:19:09.716Z] =================================================================================================================== 00:48:39.589 [2024-11-28T12:19:09.716Z] Total : 24162.41 94.38 0.00 0.00 5281.46 3749.76 8156.42 00:48:39.590 { 00:48:39.590 "results": [ 00:48:39.590 { 00:48:39.590 "job": "nvme0n1", 00:48:39.590 "core_mask": "0x2", 00:48:39.590 "workload": "randread", 00:48:39.590 "status": "finished", 00:48:39.590 "queue_depth": 128, 00:48:39.590 "io_size": 4096, 00:48:39.590 "runtime": 1.005322, 00:48:39.590 "iops": 24162.407666399424, 00:48:39.590 "mibps": 94.38440494687275, 00:48:39.590 "io_failed": 0, 00:48:39.590 "io_timeout": 0, 00:48:39.590 "avg_latency_us": 5281.462936799426, 00:48:39.590 "min_latency_us": 3749.762779819579, 00:48:39.590 "max_latency_us": 8156.418309388573 00:48:39.590 } 00:48:39.590 ], 00:48:39.590 "core_count": 1 00:48:39.590 } 00:48:39.590 13:19:09 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:48:39.590 13:19:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:48:39.850 13:19:09 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:48:39.850 13:19:09 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:48:39.850 13:19:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:48:39.850 13:19:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:48:39.850 13:19:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:48:39.850 13:19:09 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:39.850 13:19:09 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:48:39.850 13:19:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:48:39.850 13:19:09 keyring_linux -- keyring/linux.sh@23 -- # return 00:48:39.850 13:19:09 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:48:39.850 13:19:09 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:48:39.850 13:19:09 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:48:39.850 13:19:09 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:48:39.850 13:19:09 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:39.850 13:19:09 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:48:39.850 13:19:09 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:39.850 13:19:09 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:48:39.851 13:19:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:48:40.111 [2024-11-28 13:19:10.095440] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:48:40.111 [2024-11-28 13:19:10.096199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe08ed0 (107): Transport endpoint is not connected 00:48:40.111 [2024-11-28 13:19:10.097193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe08ed0 (9): Bad file descriptor 00:48:40.111 [2024-11-28 13:19:10.098192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:48:40.111 [2024-11-28 13:19:10.098200] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:48:40.111 [2024-11-28 13:19:10.098206] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:48:40.111 [2024-11-28 13:19:10.098213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:48:40.111 request: 00:48:40.111 { 00:48:40.111 "name": "nvme0", 00:48:40.111 "trtype": "tcp", 00:48:40.111 "traddr": "127.0.0.1", 00:48:40.111 "adrfam": "ipv4", 00:48:40.111 "trsvcid": "4420", 00:48:40.111 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:40.111 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:40.111 "prchk_reftag": false, 00:48:40.111 "prchk_guard": false, 00:48:40.111 "hdgst": false, 00:48:40.111 "ddgst": false, 00:48:40.111 "psk": ":spdk-test:key1", 00:48:40.111 "allow_unrecognized_csi": false, 00:48:40.111 "method": "bdev_nvme_attach_controller", 00:48:40.111 "req_id": 1 00:48:40.111 } 00:48:40.111 Got JSON-RPC error response 00:48:40.111 response: 00:48:40.111 { 00:48:40.111 "code": -5, 00:48:40.111 "message": "Input/output error" 00:48:40.111 } 00:48:40.111 13:19:10 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:48:40.111 13:19:10 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:48:40.111 13:19:10 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:48:40.111 13:19:10 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:48:40.111 13:19:10 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:48:40.111 13:19:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:48:40.111 13:19:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:48:40.111 13:19:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:48:40.111 13:19:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:48:40.111 13:19:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:48:40.111 13:19:10 keyring_linux -- keyring/linux.sh@33 -- # sn=405554323 00:48:40.111 13:19:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 405554323 00:48:40.111 1 links removed 00:48:40.111 13:19:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:48:40.111 13:19:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:48:40.111 
13:19:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:48:40.111 13:19:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:48:40.111 13:19:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:48:40.111 13:19:10 keyring_linux -- keyring/linux.sh@33 -- # sn=153277030 00:48:40.111 13:19:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 153277030 00:48:40.111 1 links removed 00:48:40.111 13:19:10 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3824867 00:48:40.111 13:19:10 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3824867 ']' 00:48:40.111 13:19:10 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3824867 00:48:40.111 13:19:10 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:48:40.111 13:19:10 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:40.111 13:19:10 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3824867 00:48:40.111 13:19:10 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:48:40.111 13:19:10 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:48:40.111 13:19:10 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3824867' 00:48:40.111 killing process with pid 3824867 00:48:40.111 13:19:10 keyring_linux -- common/autotest_common.sh@973 -- # kill 3824867 00:48:40.111 Received shutdown signal, test time was about 1.000000 seconds 00:48:40.111 00:48:40.111 Latency(us) 00:48:40.111 [2024-11-28T12:19:10.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:40.111 [2024-11-28T12:19:10.238Z] =================================================================================================================== 00:48:40.111 [2024-11-28T12:19:10.238Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:48:40.111 13:19:10 keyring_linux -- common/autotest_common.sh@978 -- # wait 3824867 
00:48:40.373 13:19:10 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3824697 00:48:40.373 13:19:10 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3824697 ']' 00:48:40.373 13:19:10 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3824697 00:48:40.373 13:19:10 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:48:40.373 13:19:10 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:40.373 13:19:10 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3824697 00:48:40.373 13:19:10 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:48:40.373 13:19:10 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:48:40.373 13:19:10 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3824697' 00:48:40.373 killing process with pid 3824697 00:48:40.373 13:19:10 keyring_linux -- common/autotest_common.sh@973 -- # kill 3824697 00:48:40.373 13:19:10 keyring_linux -- common/autotest_common.sh@978 -- # wait 3824697 00:48:40.634 00:48:40.634 real 0m5.105s 00:48:40.634 user 0m9.264s 00:48:40.634 sys 0m1.435s 00:48:40.634 13:19:10 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:40.634 13:19:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:48:40.634 ************************************ 00:48:40.634 END TEST keyring_linux 00:48:40.634 ************************************ 00:48:40.634 13:19:10 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:48:40.634 13:19:10 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:48:40.634 13:19:10 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:48:40.634 13:19:10 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:48:40.634 13:19:10 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:48:40.634 13:19:10 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:48:40.634 13:19:10 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:48:40.634 13:19:10 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:48:40.634 13:19:10 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:48:40.634 13:19:10 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:48:40.634 13:19:10 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:48:40.634 13:19:10 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:48:40.634 13:19:10 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:48:40.634 13:19:10 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:48:40.634 13:19:10 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:48:40.634 13:19:10 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:48:40.634 13:19:10 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:48:40.634 13:19:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:40.634 13:19:10 -- common/autotest_common.sh@10 -- # set +x 00:48:40.634 13:19:10 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:48:40.634 13:19:10 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:48:40.634 13:19:10 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:48:40.634 13:19:10 -- common/autotest_common.sh@10 -- # set +x 00:48:48.769 INFO: APP EXITING 00:48:48.769 INFO: killing all VMs 00:48:48.769 INFO: killing vhost app 00:48:48.769 WARN: no vhost pid file found 00:48:48.769 INFO: EXIT DONE 00:48:52.066 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:48:52.066 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:48:52.066 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:48:52.066 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:48:52.066 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:48:52.066 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:48:52.066 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:48:52.066 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:48:52.066 0000:65:00.0 (144d a80a): Already using the nvme driver 00:48:52.066 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:48:52.066 0000:00:01.7 (8086 0b00): 
Already using the ioatdma driver 00:48:52.066 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:48:52.066 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:48:52.066 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:48:52.066 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:48:52.066 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:48:52.066 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:48:56.268 Cleaning 00:48:56.268 Removing: /var/run/dpdk/spdk0/config 00:48:56.268 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:48:56.268 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:48:56.268 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:48:56.268 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:48:56.268 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:48:56.268 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:48:56.268 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:48:56.268 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:48:56.268 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:48:56.268 Removing: /var/run/dpdk/spdk0/hugepage_info 00:48:56.268 Removing: /var/run/dpdk/spdk1/config 00:48:56.269 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:48:56.269 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:48:56.269 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:48:56.269 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:48:56.269 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:48:56.269 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:48:56.269 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:48:56.269 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:48:56.269 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:48:56.269 Removing: /var/run/dpdk/spdk1/hugepage_info 00:48:56.269 Removing: /var/run/dpdk/spdk2/config 00:48:56.269 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:48:56.269 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:48:56.269 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:48:56.269 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:48:56.269 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:48:56.269 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:48:56.269 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:48:56.269 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:48:56.269 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:48:56.269 Removing: /var/run/dpdk/spdk2/hugepage_info
00:48:56.269 Removing: /var/run/dpdk/spdk3/config
00:48:56.269 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:48:56.269 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:48:56.269 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:48:56.269 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:48:56.269 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:48:56.269 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:48:56.269 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:48:56.269 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:48:56.269 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:48:56.269 Removing: /var/run/dpdk/spdk3/hugepage_info
00:48:56.269 Removing: /var/run/dpdk/spdk4/config
00:48:56.269 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:48:56.269 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:48:56.269 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:48:56.269 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:48:56.269 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:48:56.269 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:48:56.269 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:48:56.269 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:48:56.269 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:48:56.269 Removing: /var/run/dpdk/spdk4/hugepage_info
00:48:56.269 Removing: /dev/shm/bdev_svc_trace.1
00:48:56.269 Removing: /dev/shm/nvmf_trace.0
00:48:56.269 Removing: /dev/shm/spdk_tgt_trace.pid3145093
00:48:56.269 Removing: /var/run/dpdk/spdk0
00:48:56.269 Removing: /var/run/dpdk/spdk1
00:48:56.269 Removing: /var/run/dpdk/spdk2
00:48:56.269 Removing: /var/run/dpdk/spdk3
00:48:56.269 Removing: /var/run/dpdk/spdk4
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3143530
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3145093
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3145879
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3146916
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3147260
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3148328
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3148663
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3148877
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3149947
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3150729
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3151122
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3151518
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3151893
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3152186
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3152382
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3152726
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3153115
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3154231
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3157796
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3158170
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3158524
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3158845
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3159231
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3159458
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3159907
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3159948
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3160317
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3160600
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3160689
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3161018
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3161468
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3161822
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3162224
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3166745
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3172132
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3184228
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3184918
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3190621
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3191228
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3196311
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3203404
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3206695
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3219356
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3230262
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3232419
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3233446
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3255019
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3259785
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3359142
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3365529
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3372712
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3380613
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3380621
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3381623
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3382735
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3383737
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3384869
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3384986
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3385227
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3385504
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3385541
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3386552
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3387558
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3388565
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3389231
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3389256
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3389574
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3391015
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3392408
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3402075
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3436556
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3441956
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3443962
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3446147
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3446328
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3446668
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3447012
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3447731
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3450067
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3451161
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3451868
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3454558
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3455284
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3456002
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3461055
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3468212
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3468214
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3468215
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3473018
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3477697
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3483369
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3527394
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3532125
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3539352
00:48:56.269 Removing: /var/run/dpdk/spdk_pid3540841
00:48:56.270 Removing: /var/run/dpdk/spdk_pid3542681
00:48:56.270 Removing: /var/run/dpdk/spdk_pid3544210
00:48:56.270 Removing: /var/run/dpdk/spdk_pid3549901
00:48:56.270 Removing: /var/run/dpdk/spdk_pid3555079
00:48:56.270 Removing: /var/run/dpdk/spdk_pid3560188
00:48:56.270 Removing: /var/run/dpdk/spdk_pid3569748
00:48:56.270 Removing: /var/run/dpdk/spdk_pid3569804
00:48:56.270 Removing: /var/run/dpdk/spdk_pid3574939
00:48:56.270 Removing: /var/run/dpdk/spdk_pid3575172
00:48:56.270 Removing: /var/run/dpdk/spdk_pid3575469
00:48:56.270 Removing: /var/run/dpdk/spdk_pid3576023
00:48:56.270 Removing: /var/run/dpdk/spdk_pid3576137
00:48:56.270 Removing: /var/run/dpdk/spdk_pid3577494
00:48:56.270 Removing: /var/run/dpdk/spdk_pid3579477
00:48:56.270 Removing: /var/run/dpdk/spdk_pid3581363
00:48:56.270 Removing: /var/run/dpdk/spdk_pid3583258
00:48:56.270 Removing: /var/run/dpdk/spdk_pid3585195
00:48:56.270 Removing: /var/run/dpdk/spdk_pid3587189
00:48:56.270 Removing: /var/run/dpdk/spdk_pid3594569
00:48:56.270 Removing: /var/run/dpdk/spdk_pid3595321
00:48:56.270 Removing: /var/run/dpdk/spdk_pid3596486
00:48:56.530 Removing: /var/run/dpdk/spdk_pid3597722
00:48:56.530 Removing: /var/run/dpdk/spdk_pid3603950
00:48:56.530 Removing: /var/run/dpdk/spdk_pid3607565
00:48:56.530 Removing: /var/run/dpdk/spdk_pid3614244
00:48:56.530 Removing: /var/run/dpdk/spdk_pid3620528
00:48:56.530 Removing: /var/run/dpdk/spdk_pid3630688
00:48:56.530 Removing: /var/run/dpdk/spdk_pid3639144
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3639243
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3662472
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3663161
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3663959
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3664765
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3665690
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3666478
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3667203
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3667942
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3673029
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3673344
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3680399
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3680616
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3687081
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3692196
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3703649
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3704362
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3709951
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3710301
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3715338
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3721979
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3724844
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3736976
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3747433
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3749362
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3750398
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3770530
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3775138
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3778401
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3785904
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3785932
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3792072
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3794317
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3796524
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3798023
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3800258
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3801746
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3812264
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3812875
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3813475
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3816255
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3816895
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3817563
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3822315
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3822443
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3824256
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3824697
00:48:56.531 Removing: /var/run/dpdk/spdk_pid3824867
00:48:56.531 Clean
00:48:56.792 13:19:26 -- common/autotest_common.sh@1453 -- # return 0
00:48:56.793 13:19:26 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:48:56.793 13:19:26 -- common/autotest_common.sh@732 -- # xtrace_disable
00:48:56.793 13:19:26 -- common/autotest_common.sh@10 -- # set +x
00:48:56.793 13:19:26 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:48:56.793 13:19:26 -- common/autotest_common.sh@732 -- # xtrace_disable
00:48:56.793 13:19:26 -- common/autotest_common.sh@10 -- # set +x
00:48:56.793 13:19:26 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:48:56.793 13:19:26 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:48:56.793 13:19:26 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:48:56.793 13:19:26 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:48:56.793 13:19:26 -- spdk/autotest.sh@398 -- # hostname
00:48:56.793 13:19:26 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:48:57.053 geninfo: WARNING: invalid characters removed from testname!
00:49:23.633 13:19:52 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:49:26.255 13:19:55 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:49:28.170 13:19:58 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:49:30.081 13:19:59 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:49:31.461 13:20:01 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:49:34.002 13:20:03 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:49:35.385 13:20:05 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:49:35.385 13:20:05 -- spdk/autorun.sh@1 -- $ timing_finish
00:49:35.385 13:20:05 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:49:35.385 13:20:05 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:49:35.385 13:20:05 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:49:35.385 13:20:05 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:49:35.385 + [[ -n 3040601 ]]
00:49:35.385 + sudo kill 3040601
00:49:35.396 [Pipeline] }
00:49:35.409 [Pipeline] // stage
00:49:35.414 [Pipeline] }
00:49:35.427 [Pipeline] // timeout
00:49:35.431 [Pipeline] }
00:49:35.444 [Pipeline] // catchError
00:49:35.448 [Pipeline] }
00:49:35.461 [Pipeline] // wrap
00:49:35.467 [Pipeline] }
00:49:35.479 [Pipeline] // catchError
00:49:35.487 [Pipeline] stage
00:49:35.489 [Pipeline] { (Epilogue)
00:49:35.501 [Pipeline] catchError
00:49:35.504 [Pipeline] {
00:49:35.516 [Pipeline] echo
00:49:35.518 Cleanup processes
00:49:35.524 [Pipeline] sh
00:49:35.813 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:49:35.813 3838667 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:49:35.828 [Pipeline] sh
00:49:36.117 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:49:36.117 ++ grep -v 'sudo pgrep'
00:49:36.117 ++ awk '{print $1}'
00:49:36.117 + sudo kill -9
00:49:36.117 + true
00:49:36.130 [Pipeline] sh
00:49:36.419 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:49:48.663 [Pipeline] sh
00:49:48.952 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:49:48.952 Artifacts sizes are good
00:49:48.968 [Pipeline] archiveArtifacts
00:49:48.976 Archiving artifacts
00:49:49.193 [Pipeline] sh
00:49:49.507 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:49:49.525 [Pipeline] cleanWs
00:49:49.537 [WS-CLEANUP] Deleting project workspace...
00:49:49.537 [WS-CLEANUP] Deferred wipeout is used...
00:49:49.547 [WS-CLEANUP] done
00:49:49.549 [Pipeline] }
00:49:49.566 [Pipeline] // catchError
00:49:49.579 [Pipeline] sh
00:49:49.868 + logger -p user.info -t JENKINS-CI
00:49:49.880 [Pipeline] }
00:49:49.894 [Pipeline] // stage
00:49:49.900 [Pipeline] }
00:49:49.915 [Pipeline] // node
00:49:49.920 [Pipeline] End of Pipeline
00:49:49.960 Finished: SUCCESS